Dataset fields: repo_name (string, 6-77 chars), path (string, 8-215 chars), license (15 classes), content (string, 335-154k chars)
brclark-usgs/flopy
examples/Notebooks/flopy3_Zaidel_example.ipynb
bsd-3-clause
%matplotlib inline import os import sys import platform import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt import flopy print(sys.version) print('numpy version: {}'.format(np.__version__)) print('matplotlib version: {}'.format(mpl.__version__)) print('flopy version: {}'.format(flopy.__version__)) #Set name of MODFLOW exe # assumes executable is in users path statement exe_name = 'mfusg' if platform.system() == 'Windows': exe_name += '.exe' mfexe = exe_name modelpth = os.path.join('data') modelname = 'zaidel' #make sure modelpth directory exists if not os.path.exists(modelpth): os.makedirs(modelpth) """ Explanation: FloPy MODFLOW-USG $-$ Discontinuous water table configuration over a stairway impervious base One of the most challenging numerical cases for MODFLOW arises from drying-rewetting problems often associated with abrupt changes in the elevations of impervious base of a thin unconfined aquifer. This problem simulates a discontinuous water table configuration over a stairway impervious base and flow between constant-head boundaries in column 1 and 200. This problem is based on Zaidel, J. (2013), Discontinuous Steady-State Analytical Solutions of the Boussinesq Equation and Their Numerical Representation by Modflow. Groundwater, 51: 952–959. doi: 10.1111/gwat.12019 The model consistes of a grid of 200 columns, 1 row, and 1 layer; a bottom altitude of ranging from 20 to 0 m; constant heads of 23 and 5 m in column 1 and 200, respectively; and a horizontal hydraulic conductivity of $1x10^{-4}$ m/d. The discretization is 5 m in the row direction for all cells. In this example results from MODFLOW-USG will be evaluated. End of explanation """ # model dimensions nlay, nrow, ncol = 1, 1, 200 delr = 50. delc = 1. # boundary heads h1 = 23. h2 = 5. # cell centroid locations x = np.arange(0., float(ncol)*delr, delr) + delr / 2. # ibound ibound = np.ones((nlay, nrow, ncol), dtype=np.int) ibound[:, :, 0] = -1 ibound[:, :, -1] = -1 # bottom of the model botm = 25 * np.ones((nlay + 1, nrow, ncol), dtype=np.float) base = 20. 
for j in range(ncol): botm[1, :, j] = base #if j > 0 and j % 40 == 0: if j+1 in [40,80,120,160]: base -= 5 # starting heads strt = h1 * np.ones((nlay, nrow, ncol), dtype=np.float) strt[:, :, -1] = h2 """ Explanation: Model parameters End of explanation """ #make the flopy model mf = flopy.modflow.Modflow(modelname=modelname, exe_name=mfexe, model_ws=modelpth) dis = flopy.modflow.ModflowDis(mf, nlay, nrow, ncol, delr=delr, delc=delc, top=botm[0, :, :], botm=botm[1:, :, :], perlen=1, nstp=1, steady=True) bas = flopy.modflow.ModflowBas(mf, ibound=ibound, strt=strt) lpf = flopy.modflow.ModflowLpf(mf, hk=0.0001, laytyp=4) oc = flopy.modflow.ModflowOc(mf, stress_period_data={(0,0): ['print budget', 'print head', 'save head', 'save budget']}) sms = flopy.modflow.ModflowSms(mf, nonlinmeth=1, linmeth=1, numtrack=50, btol=1.1, breduc=0.70, reslim = 0.0, theta=0.85, akappa=0.0001, gamma=0., amomentum=0.1, iacl=2, norder=0, level=5, north=7, iredsys=0, rrctol=0., idroptol=1, epsrn=1.e-5, mxiter=500, hclose=1.e-3, hiclose=1.e-3, iter1=50) mf.write_input() # remove any existing head files try: os.remove(os.path.join(model_ws, '{0}.hds'.format(modelname))) except: pass # run the model mf.run_model() """ Explanation: Create and run the MODFLOW-USG model End of explanation """ # Create the mfusg headfile object headfile = os.path.join(modelpth, '{0}.hds'.format(modelname)) headobj = flopy.utils.HeadFile(headfile) times = headobj.get_times() mfusghead = headobj.get_data(totim=times[-1]) """ Explanation: Read the simulated MODFLOW-USG model results End of explanation """ fig = plt.figure(figsize=(8,6)) fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=0.25, hspace=0.25) ax = fig.add_subplot(1, 1, 1) ax.plot(x, mfusghead[0, 0, :], linewidth=0.75, color='blue', label='MODFLOW-USG') ax.fill_between(x, y1=botm[1, 0, :], y2=-5, color='0.5', alpha=0.5) leg = ax.legend(loc='upper right') leg.draw_frame(False) ax.set_xlabel('Horizontal distance, in m') ax.set_ylabel('Head, in m') ax.set_ylim(-5,25); """ Explanation: Plot MODFLOW-USG results End of explanation """
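As a quick cross-check of the stairway geometry defined above, here is a minimal numpy-only sketch (an illustration added here, not part of the original notebook) that reproduces the bottom elevations and verifies the 5 m drops after columns 40, 80, 120 and 160:

```python
import numpy as np

ncol, delr = 200, 50.0
x = np.arange(ncol) * delr + delr / 2.0          # cell centroid locations, as in the notebook

# the impervious base starts at 20 m and drops 5 m after columns 40, 80, 120 and 160
steps = np.array([40, 80, 120, 160])
bottom = 20.0 - 5.0 * np.searchsorted(steps, np.arange(ncol), side="right")

assert bottom[39] == 20.0 and bottom[40] == 15.0 and bottom[-1] == 0.0
```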
diegocavalca/Studies
deep-learnining-specialization/1. neural nets and deep learning/resources/Deep Neural Network - Application v3.ipynb
cc0-1.0
import time import numpy as np import h5py import matplotlib.pyplot as plt import scipy from PIL import Image from scipy import ndimage from dnn_app_utils_v2 import * %matplotlib inline plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' %load_ext autoreload %autoreload 2 np.random.seed(1) """ Explanation: Deep Neural Network for Image Classification: Application When you finish this, you will have finished the last programming assignment of Week 4, and also the last programming assignment of this course! You will use use the functions you'd implemented in the previous assignment to build a deep network, and apply it to cat vs non-cat classification. Hopefully, you will see an improvement in accuracy relative to your previous logistic regression implementation. After this assignment you will be able to: - Build and apply a deep neural network to supervised learning. Let's get started! 1 - Packages Let's first import all the packages that you will need during this assignment. - numpy is the fundamental package for scientific computing with Python. - matplotlib is a library to plot graphs in Python. - h5py is a common package to interact with a dataset that is stored on an H5 file. - PIL and scipy are used here to test your model with your own picture at the end. - dnn_app_utils provides the functions implemented in the "Building your Deep Neural Network: Step by Step" assignment to this notebook. - np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work. End of explanation """ train_x_orig, train_y, test_x_orig, test_y, classes = load_data() """ Explanation: 2 - Dataset You will use the same "Cat vs non-Cat" dataset as in "Logistic Regression as a Neural Network" (Assignment 2). The model you had built had 70% test accuracy on classifying cats vs non-cats images. Hopefully, your new model will perform a better! Problem Statement: You are given a dataset ("data.h5") containing: - a training set of m_train images labelled as cat (1) or non-cat (0) - a test set of m_test images labelled as cat and non-cat - each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB). Let's get more familiar with the dataset. Load the data by running the cell below. End of explanation """ # Example of a picture index = 73 plt.imshow(train_x_orig[index]) print ("y = " + str(train_y[0,index]) + ". It's a " + classes[train_y[0,index]].decode("utf-8") + " picture.") # Explore your dataset m_train = train_x_orig.shape[0] num_px = train_x_orig.shape[1] m_test = test_x_orig.shape[0] print ("Number of training examples: " + str(m_train)) print ("Number of testing examples: " + str(m_test)) print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)") print ("train_x_orig shape: " + str(train_x_orig.shape)) print ("train_y shape: " + str(train_y.shape)) print ("test_x_orig shape: " + str(test_x_orig.shape)) print ("test_y shape: " + str(test_y.shape)) """ Explanation: The following code will show you an image in the dataset. Feel free to change the index and re-run the cell multiple times to see other images. End of explanation """ # Reshape the training and test examples train_x_flatten = train_x_orig.reshape(train_x_orig.shape[0], -1).T # The "-1" makes reshape flatten the remaining dimensions test_x_flatten = test_x_orig.reshape(test_x_orig.shape[0], -1).T # Standardize data to have feature values between 0 and 1. 
train_x = train_x_flatten/255. test_x = test_x_flatten/255. print ("train_x's shape: " + str(train_x.shape)) print ("test_x's shape: " + str(test_x.shape)) """ Explanation: As usual, you reshape and standardize the images before feeding them to the network. The code is given in the cell below. <img src="images/imvectorkiank.png" style="width:450px;height:300px;"> <caption><center> <u>Figure 1</u>: Image to vector conversion. <br> </center></caption> End of explanation """ ### CONSTANTS DEFINING THE MODEL #### n_x = 12288 # num_px * num_px * 3 n_h = 7 n_y = 1 layers_dims = (n_x, n_h, n_y) # GRADED FUNCTION: two_layer_model def two_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False): """ Implements a two-layer neural network: LINEAR->RELU->LINEAR->SIGMOID. Arguments: X -- input data, of shape (n_x, number of examples) Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples) layers_dims -- dimensions of the layers (n_x, n_h, n_y) num_iterations -- number of iterations of the optimization loop learning_rate -- learning rate of the gradient descent update rule print_cost -- If set to True, this will print the cost every 100 iterations Returns: parameters -- a dictionary containing W1, W2, b1, and b2 """ np.random.seed(1) grads = {} costs = [] # to keep track of the cost m = X.shape[1] # number of examples (n_x, n_h, n_y) = layers_dims # Initialize parameters dictionary, by calling one of the functions you'd previously implemented ### START CODE HERE ### (≈ 1 line of code) parameters = initialize_parameters(n_x, n_h, n_y) ### END CODE HERE ### # Get W1, b1, W2 and b2 from the dictionary parameters. W1 = parameters["W1"] b1 = parameters["b1"] W2 = parameters["W2"] b2 = parameters["b2"] # Loop (gradient descent) for i in range(0, num_iterations): # Forward propagation: LINEAR -> RELU -> LINEAR -> SIGMOID. Inputs: "X, W1, b1". Output: "A1, cache1, A2, cache2". ### START CODE HERE ### (≈ 2 lines of code) A1, cache1 = linear_activation_forward(X, W1, b1, 'relu') A2, cache2 = linear_activation_forward(A1, W2, b2, 'sigmoid') ### END CODE HERE ### # Compute cost ### START CODE HERE ### (≈ 1 line of code) cost = compute_cost(A2, Y) ### END CODE HERE ### # Initializing backward propagation dA2 = - (np.divide(Y, A2) - np.divide(1 - Y, 1 - A2)) # Backward propagation. Inputs: "dA2, cache2, cache1". Outputs: "dA1, dW2, db2; also dA0 (not used), dW1, db1". ### START CODE HERE ### (≈ 2 lines of code) dA1, dW2, db2 = linear_activation_backward(dA2, cache2, 'sigmoid') dA0, dW1, db1 = linear_activation_backward(dA1, cache1, 'relu') ### END CODE HERE ### # Set grads['dWl'] to dW1, grads['db1'] to db1, grads['dW2'] to dW2, grads['db2'] to db2 grads['dW1'] = dW1 grads['db1'] = db1 grads['dW2'] = dW2 grads['db2'] = db2 # Update parameters. ### START CODE HERE ### (approx. 
1 line of code) parameters = update_parameters(parameters, grads, learning_rate) ### END CODE HERE ### # Retrieve W1, b1, W2, b2 from parameters W1 = parameters["W1"] b1 = parameters["b1"] W2 = parameters["W2"] b2 = parameters["b2"] # Print the cost every 100 training example if print_cost and i % 100 == 0: print("Cost after iteration {}: {}".format(i, np.squeeze(cost))) if print_cost and i % 100 == 0: costs.append(cost) # plot the cost plt.plot(np.squeeze(costs)) plt.ylabel('cost') plt.xlabel('iterations (per tens)') plt.title("Learning rate =" + str(learning_rate)) plt.show() return parameters """ Explanation: $12,288$ equals $64 \times 64 \times 3$ which is the size of one reshaped image vector. 3 - Architecture of your model Now that you are familiar with the dataset, it is time to build a deep neural network to distinguish cat images from non-cat images. You will build two different models: - A 2-layer neural network - An L-layer deep neural network You will then compare the performance of these models, and also try out different values for $L$. Let's look at the two architectures. 3.1 - 2-layer neural network <img src="images/2layerNN_kiank.png" style="width:650px;height:400px;"> <caption><center> <u>Figure 2</u>: 2-layer neural network. <br> The model can be summarized as: INPUT -> LINEAR -> RELU -> LINEAR -> SIGMOID -> OUTPUT. </center></caption> <u>Detailed Architecture of figure 2</u>: - The input is a (64,64,3) image which is flattened to a vector of size $(12288,1)$. - The corresponding vector: $[x_0,x_1,...,x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ of size $(n^{[1]}, 12288)$. - You then add a bias term and take its relu to get the following vector: $[a_0^{[1]}, a_1^{[1]},..., a_{n^{[1]}-1}^{[1]}]^T$. - You then repeat the same process. - You multiply the resulting vector by $W^{[2]}$ and add your intercept (bias). - Finally, you take the sigmoid of the result. If it is greater than 0.5, you classify it to be a cat. 3.2 - L-layer deep neural network It is hard to represent an L-layer deep neural network with the above representation. However, here is a simplified network representation: <img src="images/LlayerNN_kiank.png" style="width:650px;height:400px;"> <caption><center> <u>Figure 3</u>: L-layer neural network. <br> The model can be summarized as: [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID</center></caption> <u>Detailed Architecture of figure 3</u>: - The input is a (64,64,3) image which is flattened to a vector of size (12288,1). - The corresponding vector: $[x_0,x_1,...,x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ and then you add the intercept $b^{[1]}$. The result is called the linear unit. - Next, you take the relu of the linear unit. This process could be repeated several times for each $(W^{[l]}, b^{[l]})$ depending on the model architecture. - Finally, you take the sigmoid of the final linear unit. If it is greater than 0.5, you classify it to be a cat. 3.3 - General methodology As usual you will follow the Deep Learning methodology to build the model: 1. Initialize parameters / Define hyperparameters 2. Loop for num_iterations: a. Forward propagation b. Compute cost function c. Backward propagation d. Update parameters (using parameters, and grads from backprop) 4. Use trained parameters to predict labels Let's now implement those two models! 
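Before the graded cells below, here is a hedged pseudocode-style sketch of that general methodology; initialize, forward, backward and update are placeholder names used only for illustration, not the actual dnn_app_utils helpers:

```python
# Hypothetical skeleton of the training methodology listed above (placeholder helpers).
def train_skeleton(X, Y, layers_dims, learning_rate=0.0075, num_iterations=3000):
    parameters = initialize(layers_dims)                        # 1. initialize parameters
    for i in range(num_iterations):                             # 2. optimization loop
        AL, caches = forward(X, parameters)                     #    a. forward propagation
        cost = compute_cost(AL, Y)                              #    b. compute cost
        grads = backward(AL, Y, caches)                         #    c. backward propagation
        parameters = update(parameters, grads, learning_rate)   #    d. update parameters
    return parameters                                           # 3. predict with trained parameters
```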
4 - Two-layer neural network Question: Use the helper functions you have implemented in the previous assignment to build a 2-layer neural network with the following structure: LINEAR -> RELU -> LINEAR -> SIGMOID. The functions you may need and their inputs are: python def initialize_parameters(n_x, n_h, n_y): ... return parameters def linear_activation_forward(A_prev, W, b, activation): ... return A, cache def compute_cost(AL, Y): ... return cost def linear_activation_backward(dA, cache, activation): ... return dA_prev, dW, db def update_parameters(parameters, grads, learning_rate): ... return parameters End of explanation """ parameters = two_layer_model(train_x, train_y, layers_dims = (n_x, n_h, n_y), num_iterations = 2500, print_cost=True) """ Explanation: Run the cell below to train your parameters. See if your model runs. The cost should be decreasing. It may take up to 5 minutes to run 2500 iterations. Check if the "Cost after iteration 0" matches the expected output below, if not click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error. End of explanation """ predictions_train = predict(train_x, train_y, parameters) """ Explanation: Expected Output: <table> <tr> <td> **Cost after iteration 0**</td> <td> 0.6930497356599888 </td> </tr> <tr> <td> **Cost after iteration 100**</td> <td> 0.6464320953428849 </td> </tr> <tr> <td> **...**</td> <td> ... </td> </tr> <tr> <td> **Cost after iteration 2400**</td> <td> 0.048554785628770206 </td> </tr> </table> Good thing you built a vectorized implementation! Otherwise it might have taken 10 times longer to train this. Now, you can use the trained parameters to classify images from the dataset. To see your predictions on the training and test sets, run the cell below. End of explanation """ predictions_test = predict(test_x, test_y, parameters) """ Explanation: Expected Output: <table> <tr> <td> **Accuracy**</td> <td> 1.0 </td> </tr> </table> End of explanation """ ### CONSTANTS ### layers_dims = [12288, 20, 7, 5, 1] # 5-layer model # GRADED FUNCTION: L_layer_model def L_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):#lr was 0.009 """ Implements a L-layer neural network: [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID. Arguments: X -- data, numpy array of shape (number of examples, num_px * num_px * 3) Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples) layers_dims -- list containing the input size and each layer size, of length (number of layers + 1). learning_rate -- learning rate of the gradient descent update rule num_iterations -- number of iterations of the optimization loop print_cost -- if True, it prints the cost every 100 steps Returns: parameters -- parameters learnt by the model. They can then be used to predict. """ np.random.seed(1) costs = [] # keep track of cost # Parameters initialization. ### START CODE HERE ### parameters = initialize_parameters_deep(layers_dims) ### END CODE HERE ### # Loop (gradient descent) for i in range(0, num_iterations): # Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID. ### START CODE HERE ### (≈ 1 line of code) AL, caches = L_model_forward(X, parameters) ### END CODE HERE ### # Compute cost. ### START CODE HERE ### (≈ 1 line of code) cost = compute_cost(AL, Y) ### END CODE HERE ### # Backward propagation. ### START CODE HERE ### (≈ 1 line of code) grads = L_model_backward(AL, Y, caches) ### END CODE HERE ### # Update parameters. 
### START CODE HERE ### (≈ 1 line of code) parameters = update_parameters(parameters, grads, learning_rate) ### END CODE HERE ### # Print the cost every 100 training example if print_cost and i % 100 == 0: print ("Cost after iteration %i: %f" %(i, cost)) if print_cost and i % 100 == 0: costs.append(cost) # plot the cost plt.plot(np.squeeze(costs)) plt.ylabel('cost') plt.xlabel('iterations (per tens)') plt.title("Learning rate =" + str(learning_rate)) plt.show() return parameters """ Explanation: Expected Output: <table> <tr> <td> **Accuracy**</td> <td> 0.72 </td> </tr> </table> Note: You may notice that running the model on fewer iterations (say 1500) gives better accuracy on the test set. This is called "early stopping" and we will talk about it in the next course. Early stopping is a way to prevent overfitting. Congratulations! It seems that your 2-layer neural network has better performance (72%) than the logistic regression implementation (70%, assignment week 2). Let's see if you can do even better with an $L$-layer model. 5 - L-layer Neural Network Question: Use the helper functions you have implemented previously to build an $L$-layer neural network with the following structure: [LINEAR -> RELU]$\times$(L-1) -> LINEAR -> SIGMOID. The functions you may need and their inputs are: python def initialize_parameters_deep(layer_dims): ... return parameters def L_model_forward(X, parameters): ... return AL, caches def compute_cost(AL, Y): ... return cost def L_model_backward(AL, Y, caches): ... return grads def update_parameters(parameters, grads, learning_rate): ... return parameters End of explanation """ parameters = L_layer_model(train_x, train_y, layers_dims, num_iterations = 2500, print_cost = True) """ Explanation: You will now train the model as a 5-layer neural network. Run the cell below to train your model. The cost should decrease on every iteration. It may take up to 5 minutes to run 2500 iterations. Check if the "Cost after iteration 0" matches the expected output below, if not click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error. End of explanation """ pred_train = predict(train_x, train_y, parameters) """ Explanation: Expected Output: <table> <tr> <td> **Cost after iteration 0**</td> <td> 0.771749 </td> </tr> <tr> <td> **Cost after iteration 100**</td> <td> 0.672053 </td> </tr> <tr> <td> **...**</td> <td> ... </td> </tr> <tr> <td> **Cost after iteration 2400**</td> <td> 0.092878 </td> </tr> </table> End of explanation """ pred_test = predict(test_x, test_y, parameters) """ Explanation: <table> <tr> <td> **Train Accuracy** </td> <td> 0.985645933014 </td> </tr> </table> End of explanation """ print_mislabeled_images(classes, test_x, test_y, pred_test) """ Explanation: Expected Output: <table> <tr> <td> **Test Accuracy**</td> <td> 0.8 </td> </tr> </table> Congrats! It seems that your 5-layer neural network has better performance (80%) than your 2-layer neural network (72%) on the same test set. This is good performance for this task. Nice job! Though in the next course on "Improving deep neural networks" you will learn how to obtain even higher accuracy by systematically searching for better hyperparameters (learning_rate, layers_dims, num_iterations, and others you'll also learn in the next course). 6) Results Analysis First, let's take a look at some images the L-layer model labeled incorrectly. This will show a few mislabeled images. 
End of explanation """ ## START CODE HERE ## my_image = "my_image.jpg" # change this to the name of your image file my_label_y = [1] # the true class of your image (1 -> cat, 0 -> non-cat) ## END CODE HERE ## fname = "images/" + my_image image = np.array(ndimage.imread(fname, flatten=False)) my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((num_px*num_px*3,1)) my_predicted_image = predict(my_image, my_label_y, parameters) plt.imshow(image) print ("y = " + str(np.squeeze(my_predicted_image)) + ", your L-layer model predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.") """ Explanation: A few type of images the model tends to do poorly on include: - Cat body in an unusual position - Cat appears against a background of a similar color - Unusual cat color and species - Camera Angle - Brightness of the picture - Scale variation (cat is very large or small in image) 7) Test with your own image (optional/ungraded exercise) Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Change your image's name in the following code 4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)! End of explanation """
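A portability note on the cell above: scipy.ndimage.imread and scipy.misc.imresize were removed in later SciPy releases. Below is a hedged alternative sketch using the already-imported PIL; num_px, parameters and predict are assumed to come from the earlier cells, and the 1/255 scaling mirrors what was applied to train_x:

```python
import numpy as np
from PIL import Image

def load_image_as_vector(fname, num_px):
    """Load an image, resize it to (num_px, num_px), and flatten it to a (num_px*num_px*3, 1) column."""
    img = Image.open(fname).convert("RGB").resize((num_px, num_px))
    arr = np.asarray(img, dtype=np.float64) / 255.0   # same 0-1 scaling used for train_x
    return arr.reshape(-1, 1)

# Example usage (assumes num_px, parameters and predict from the cells above):
# my_image_vec = load_image_as_vector("images/my_image.jpg", num_px)
# my_predicted_image = predict(my_image_vec, [1], parameters)
```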
WNoxchi/Kaukasos
FADL1/L7CA_lesson7-cifar10.ipynb
mit
%matplotlib inline %reload_ext autoreload %autoreload 2 """ Explanation: CIFAR 10 21 Jan 2018 22 Jan 2018 End of explanation """ from fastai.conv_learner import * PATH = "data/cifar10/" os.makedirs(PATH, exist_ok=True) classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck') stats = (np.array([ 0.4914 , 0.48216, 0.44653]), np.array([ 0.24703, 0.24349, 0.26159])) def get_data(sz,bs): tfms = tfms_from_stats(stats, sz, aug_tfms=[RandomFlip()], pad=sz//8) return ImageClassifierData.from_paths(PATH, val_name='test', tfms=tfms, bs=bs) bs=256 """ Explanation: You can get the data via: wget http://pjreddie.com/media/files/cifar.tgz End of explanation """ data = get_data(32, 4) x,y = next(iter(data.trn_dl)) plt.imshow(data.trn_ds.denorm(x)[0]); plt.imshow(data.trn_ds.denorm(x)[3]); """ Explanation: Look at dem der data End of explanation """ data = get_data(32,bs) lr=1e-2 """ Explanation: Fully Connected Model End of explanation """ class SimpleNet(nn.Module): def __init__(self, layers): super().__init__() self.layers = nn.ModuleList([ nn.Linear(layers[i], layers[i+1]) for i in range(len(layers) - 1)]) def forward(self, x): x = x.view(x.size(0), -1) for λ in self.layers: λ_x = λ(x) x = F.relu(λ_x) return F.log_softmax(λ_x, dim=-1) learn = ConvLearner.from_model_data(SimpleNet([32*32*3, 40, 10]), data) learn, [o.numel() for o in learn.model.parameters()] [o for o in learn.model.parameters()] learn.summary() learn.lr_find() learn.sched.plot() %time learn.fit(lr,2) %time learn.fit(lr, 2, cycle_len=1) """ Explanation: From this notebook by K.Turgutlu. End of explanation """ class ConvNet(nn.Module): def __init__(self, layers, c): super().__init__() self.layers = nn.ModuleList([ nn.Conv2d(layers[i], layers[i + 1], kernel_size=3, stride=2) for i in range(len(layers) - 1)]) self.pool = nn.AdaptiveMaxPool2d(1) self.out = nn.Linear(layers[-1], c) def forward(self, x): for λ in self.layers: x = F.relu(λ(x)) x = self.pool(x) x = x.view(x.size(0), -1) return F.log_softmax(self.out(x), dim=-1) """ Explanation: The goal is to basically replicated the basic architecture of a ResNet. The simple model above gets an accuracy of around 47%, with 120,000 parameters. Not great. We're deffinitely not using our parameters very well -- they're treating each pixel with a different weight. Instead we'll find groups of 3x3 pixels with particular patterns, using a ConvNet. 3. CNN The first step is to replace our FullNet model with a ConvNet model. End of explanation """ learn = ConvLearner.from_model_data(ConvNet([3, 20, 40, 80], 10), data) """ Explanation: nn.Conv2d(layers[i], layers[i + 1], kernel_size=3, stride=2): the first two pars are exactly the same as nn.Linear: num_features_in, num_features_out End of explanation """ learn.summary() """ Explanation: learn = ConvLearner.from_model_data(ConvNet([3, 20, 40, 80], 10), data): 3 channels coming in; 1st layer comes out with 20, 2nd with 40, 3rd: 80. End of explanation """ learn.lr_find(end_lr=100) learn.sched.plot() %time learn.fit(1e-1, 2) %time learn.fit(1e-1, 4, cycle_len=1) """ Explanation: To turn the output of the ConvNet into a prediction of one of ten classes is use Adaptive Max Pooling. Standard now for SotA algorithms. A Max Pool is done on the very last layer. Instead of a 2x2 or 3x3 or X-by-X, in Adaptive Max Pooling, we don't tell the algorithm how big an area to pool, instead we tell it how big a resolution to create. 
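To make the resolution-versus-area distinction concrete, a small hedged PyTorch sketch (the tensor sizes are chosen only for illustration):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 80, 4, 4)               # batch of 1, 80 feature maps, 4x4 spatial size
fixed = F.max_pool2d(x, kernel_size=4)     # must know the input is 4x4 to pool down to 1x1
adaptive = F.adaptive_max_pool2d(x, 1)     # just asks for a 1x1 output, whatever the input size
print(fixed.shape, adaptive.shape)         # both torch.Size([1, 80, 1, 1])
```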
So doing a 14x14 adaptive max pool on a 28x28 input image (like CIFAR-10 in this case) is the same as a 2x2 Max Pool. A 2x2 AMP would be the same as a 14x14 MP on a 28x28 image. What we pretty much always do in modern CNNs is make the penultimate layer a 1x1 Adaptive Max Pool. ie: find the single largest cell and use that as our new activation. That gives us a 1x1xNum_Features Tensor that we can send into our FullNet. Then we do: x = x.view(x.size(0), -1) which returns a matrix of Mini_Batch x Num_Features. We can feed that into a linear layer, with however many classes we need. End of explanation """ class ConvLayer(nn.Module): def __init__(self, ni, nf): super().__init__() self.conv = nn.Conv2d(ni, nf, kernel_size=3, stride=2, padding=1) def forward(self, x): return F.relu(self.conv(x)) class ConvNet2(nn.Module): def __init__(self, layers, c): super().__init__() self.layers = nn.ModuleList([ConvLayer(layers[i], layers[i + 1]) for i in range(len(layers) - 1)]) self.out = nn.Linear(layers[-1], c) def forward(self, x): for λ in self.layers: x = λ(x) x = F.adaptive_max_pool2d(x, 1) # F is nn.Functional x = x.view(x.size(0), -1) return F.log_softmax(self.out(x), dim=-1) """ Explanation: We have around 30,000 parameters in the ConvNet, about a quarter that in the simple FullNet, and our accuracy is around 57%, up from 47%. 4. Refactored We're going to refactor the ConvNet slightly so that we put less stuff in the forward pass. For instance, calling relu each loop isn't ideal. So we'll create a new class called ConvLayer which contains a convolution with a kernel size of 3 and a stride of two, and with padding. Padding becomes especially important when you're down to small layer sizes in later convolutions where throwing away a potential convolution around the edge will lose a significant amount of information. The relu will be inside the ConvLayer class, making it easier to edit and prevent bugs. End of explanation """ learn = ConvLearner.from_model_data(ConvNet2([3, 20, 40, 80], 10), data) learn.summary() %time learn.fit(1e-1, 2) %time learn.fit(1e-1, 2, cycle_len=1) """ Explanation: What's awesome about PyTorch is that a Layer definition and a Neural Network definition are literally identical. They both have a Constructor, and a Forward. Any time you have a layer, you can use it as a neural net, and vice versa. Also, since AMP has no state (no weights), we don't need to have it as an attribute as in the ConvNet class up above, we can instead call it as a function in the Forward method of ConvNet2. End of explanation """ class BnLayer(nn.Module): def __init__(self, ni, nf, stride=2, kernel_size=3): super().__init__() self.conv = nn.Conv2d(ni, nf, kernel_size=kernel_size, stride=stride, bias=False, padding=1) self.a = nn.Parameter(torch.zeros(nf,1,1)) self.m = nn.Parameter(torch.ones(nf, 1,1)) def forward(self, x): x = F.relu(self.conv(x)) x_chan = x.transpose(0,1).contiguous().view(x.size(1), -1) if self.training: # true for training set. false for val set. self.means = x_chan.mean(1)[:, None, None] self.stds = x_chan.std(1)[:, None, None] return (x-self.means) / self.stds * self.m + self.a """ Explanation: 5. BatchNorm An issue up above, is that we're having trouble training the ConvNet as we add more layers. If we use larger learningRates, we get NaNs (Infinities), and smaller lrs take forever and doesn't have a chance to explore properly. So it isn't resilient. To make the model more resilient, we'll use Batched Normalization. 
BatchNorm is a couple years old now (2018), and makes it much easier to train deep networks. The network we're going to create will have more layers: 5 Conv layers and 1 Full layer. Back in the day that'd be considered a pretty deep network and would be hard to train. It's very simple now thanks to Batch Norm. Batch Norm can be used by calling nn.BatchNorm.., we'll write it from scratch to learn about it. End of explanation """ class ConvBnNet(nn.Module): def __init__(self, layers, c): super().__init__() self.conv1 = nn.Conv2d(3, 10, kernel_size=5, stride=1, padding=2) self.layers = nn.ModuleList([BnLayer(layers[i], layers[i + 1]) for i in range(len(layers) - 1)]) self.out = nn.Linear(layers[-1], c) def forward(self, x): x = self.conv1(x) for λ in self.layers: x = λ(x) x = F.adaptive_max_pool2d(x, 1) x = x.view(x.size(0), -1) return F.log_softmax(self.out(x), dim=-1) """ Explanation: this is normalizing our input automatically per channel, and for later layers: per filter. but this isn't enough because SGD is a bloody-minded soldier. It will keep changing the activations to what it thinks they should be each minibatch, BatchNorm be damned. In fact, that last line on it's own: (x-self.means) / self.std literally does nothing, because SGD just undoes it the next minibatch. So what we do, is create a new multiplier for each channel and a new added value, self.m & self.a -- the adder is just 3 zeros, self.a = nn.Parameter(torch.zeros(nf,1,1)), and the multiplier is just 3 ones: self.m = nn.Parameter(torch.ones(nf, 1,1)) -- nf is number of filters, 3 in our case -- We set those to be parameters. By specifying them as nn.Paramter(..) we tell PyTorch it's allowed to learn these as weights. So initially it subtracts the Means, divides the Standard Deviations, multiplies by Ones, and adds Zeroes. Nothing much happens. Now, though, when SGD wants scale the layer up, it doesn't have to scale up every value in the matrix: it can just scale up the single trio of numbers self.m, the multiplier. Likewise if it wants to shift the matrix activations up or down a bit, it doesn't have to shift the entire weight matrix: it can just shift the trio of numbers self.a, the adder. We're normalizing the data, then saying you can shift and scale it using far fewer parameters than would've been necessary if I was asking you to shift and scale the entire set of Conv filters. In practice what this does is allow us to increase our learning rates and increase the resilience of training and allows us to add more layers. In this case, adding BNLayer instead of the original ConvLayer to the model allows us to add more layers (the 80 and 160 below in the learner), and still train it effectively. Another great thing BatchNorm does is regularize the network. IoW: you can decrease or remove dropout and weightdecay. The reason why is that each minibatch is going to have a different mean and standard deviation; so they keep changing --> this keeps changing the meaning of the filters --> this has a regularizing effect because it's noise. When you add noise of any kind, it regularizes your model. End of explanation """ learn = ConvLearner.from_model_data(ConvBnNet([10, 20, 40, 80, 160], 10), data) learn.summary() %time learn.fit(3e-2, 2) %time learn.fit(1e-1, 4, cycle_len=1) t1 = [chr(ord('a')+i) for i in range(10)] t2 = [chr(ord('ა')+i) for i in range(10)] for a,b in zip(t1, t2): print(a) print(b) """ Explanation: NOTE: this is a simplified version of BatchNorm! 
In real BatchNorm, instead of just taking the mean & stddev of the minibatch, you take an exponentially weighted moving average stddev & mean. A change in ConvBnNet -- in line with modern approaches -- is the addition of a single Conv layer in the beginning with a kernel size of 5 and a stride of 1. The reason is to make sure the first layer has a richer input: sampling from a larger area. This first layer outputs 10 5x5 filters. Padding size is kernel size - 1 / 2 = 2. End of explanation """ class ConvBnNet2(nn.Module): def __init__(self, layers, c): super().__init__() self.conv1 = nn.Conv2d(3, 10, kernel_size=5, stride=1, padding=2) self.layers = nn.ModuleList([BnLayer(layers[i], layers[i+1]) for i in range(len(layers) - 1)]) self.layers2 = nn.ModuleList([BnLayer(layers[i+1], layers[i+1], 1) for i in range(len(layers) - 1)]) self.out = nn.Linear(layers[-1], c) def forward(self, x): x = self.conv1(x) for λ, λ2 in zip(self.layers, self.layers2): x = λ(x) x = λ2(x) x = F.adaptive_max_pool2d(x, 1) x = x.view(x.size(0), -1) return F.log_softmax(self.out(x), dim=-1) learn = ConvLearner.from_model_data(ConvBnNet2([10,20,40,80,160],10), data) %time learn.fit(1e-2, 2) %time learn.fit(1e-2, 2, cycle_len=1) """ Explanation: 6 Deep BatchNorm Take a look at the accuracy rise (note val-loss < trn-loss, signaling no overfitting yet) from 47% -> 57% before, up to 70%. Woo! Okay, personal note: THIS IS SO MUCH EASIER THAN I IMAGINED. So, given that this is looking so good, and obvious thing to try increasing the depth of the model. We can't just add more of our stride-2 layers, because they halve the size each time (we're down to 2x2 by the end), so instead we create a stride-1 layer (no size-change) for each stride-2 layer created. Then zip the stride2 & stride1 layers together ( s-2 first ), which gives us alternating stride 2, 1 layers. This however, doesn't help because the model is now too deep for even batch norm to handle on it's own (12 layers (start-Conv, 10 S2-S1 Convs, 1 Linear)) End of explanation """ class ResnetLayer(BnLayer): def forward(self, x): return x + super().forward(x) class Resnet(nn.Module): def __init__(self, layers, c): super().__init__() self.conv1 = nn.Conv2d(3, 10, kernel_size=5, stride=1, padding=2) self.layers = nn.ModuleList([BnLayer(layers[i], layers[i+1]) for i in range(len(layers) - 1)]) self.layers2 = nn.ModuleList([ResnetLayer(layers[i+1], layers[i + 1], 1) for i in range(len(layers) - 1)]) self.layers3 = nn.ModuleList([ResnetLayer(layers[i+1], layers[i + 1], 1) for i in range(len(layers) - 1)]) self.out = nn.Linear(layers[-1], c) def forward(self, x): x = self.conv1(x) for λ,λ2,λ3 in zip(self.layers, self.layers2, self.layers3): x = λ3(λ2(λ(x))) # function of a function of a function x = F.adaptive_max_pool2d(x, 1) x = x.view(x.size(0), -1) return F.log_softmax(self.out(x), dim=-1) """ Explanation: Notice making the model deeper hasn't helped. It's possible to train a standard ConvNet 12 layers deep, but it's hard to do properly. Instead we're going to replace the ConvNet with a ResNet. 7. ResNet The ResnetLayer class is going to inherit from BnLayer and replace our forward with return x + super().forward(x). And that's it. Everything else is going to be identical -- except that we're now going to make the network 4x deeper. 
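Relating back to the BatchNorm note above, here is a hedged sketch of the exponentially weighted running statistics a real BatchNorm layer keeps (the 0.1 momentum is an assumption, matching PyTorch's default for nn.BatchNorm2d):

```python
import torch

momentum = 0.1                        # assumed value; PyTorch's nn.BatchNorm2d default
running_mean = torch.zeros(3)         # one running mean per channel/filter
running_std = torch.ones(3)
batch = torch.randn(16, 3)            # fake minibatch: 16 samples, 3 channels

# Updated once per minibatch at train time; at eval time the running values
# are used in place of the per-batch statistics.
running_mean = (1 - momentum) * running_mean + momentum * batch.mean(dim=0)
running_std = (1 - momentum) * running_std + momentum * batch.std(dim=0)
```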
End of explanation """ learn = ConvLearner.from_model_data(Resnet([10,20,40,80,160], 10), data) wd=1e-5 %time learn.fit(1e-2, 2, wds=wd) %time learn.fit(1e-2, 3, cycle_len=1, cycle_mult=2, wds=wd) %time learn.fit(1e-2, 8, cycle_len=4, wds=wd) """ Explanation: And now this model is going to train beautifully just because of that one line: def forward(self, x): return x + super().forward(x). Why is that? This is called a ResNet block. It says its prediction $$y = x + f(x)$$ in this case the function is a convolution. Which is also saying: $$f(x) = y - x$$, where f(x) is the current layer's prediction, y is the prediction from the previous layer. What it's doing is trying to fit a function f to the difference between y and x. That difference is the residual. If y is what I'm trying to calculate, and x is the thing I've most recently calculated (input to current layer), then the difference between the two is essentially the error ito what I've calc'd so far. So this is saying attempt to find a set of convolutional weights that attempts to fill in the amount I was off by. Lecture at ~ 1:55:00 End of explanation """ class Resnet2(nn.Module): def __init__(self, layers, c, p=0.5): super().__init__() self.conv1 = BnLayer(3, 16, stride=1, kernel_size=7) self.layers = nn.ModuleList([BnLayer(layers[i], layers[i+1]) for i in range(len(layers) - 1)]) self.layers2 = nn.ModuleList([ResnetLayer(layers[i+1], layers[i + 1], 1) for i in range(len(layers) - 1)]) self.layers3 = nn.ModuleList([ResnetLayer(layers[i+1], layers[i + 1], 1) for i in range(len(layers) - 1)]) self.out = nn.Linear(layers[-1], c) self.drop = nn.Dropout(p) # dropout added def forward(self, x): x = self.conv1(x) for λ,λ2,λ3 in zip(self.layers, self.layers2, self.layers3): x = λ3(λ2(λ(x))) x = F.adaptive_max_pool2d(x, 1) x = x.view(x.size(0), -1) x = self.drop(x) return F.log_softmax(self.out(x), dim=-1) """ Explanation: The idea is, if we have some inputs coming in and a function trying to predict how much the error is, then add on another prediction of error at that new stage, and on and on, then each time we're zooming in closer and closer to the correct answer -- ie: we've gotten to a certain point but there's still an error, a residual, so let's create a model that predicts that residual and add that onto our previous model, and another model that predicts that residual, and adds that on, and etc. If you keep doing that over and over, you should get closer and closer to our answer. This is based on the theory of Boosting. By specifying return x + super().forward(x) as the thing we're trying to calculate, then we're kind of getting boosting for free. Note that here, only one convolution is done in the ResNet block. Actual standard ResNet blocks use two convolutions before adding back onto the input. Note also that the first layer in every block is a standard Conv layer w/ a stride of two, not a Res layer. This is a bottleneck layer. From time to time we change the geometry in a ResNet model. Actual ResNets don't use a standard Conv layer for bottlenecks; but that'll be covered in Part 2 of this course. Still, this simplified ResNet gets up to 82% accuracy (incl. overfitting). 8. 
ResNet2 We can make the Resnet even bigger End of explanation """ # all sizes increased; 0.2 dropout learn = ConvLearner.from_model_data(Resnet2([16, 32, 64, 128, 256], 10, 0.2), data) wd=1e-6 %time learn.fit(1e-2, 2, wds=wd) %time learn.fit(1e-2, 3, cycle_len=1, cycle_mult=2, wds=wd) %time learn.fit(1e-2, 8, cycle_len=4, wds=wd) learn.save('tmp') log_preds,y = learn.TTA() preds = np.mean(np.exp(log_preds), 0) metrics.log_loss(y,preds), accuracy(preds,y) """ Explanation: Other than the minor simplification of ResNet, this is a reasonable approximation of a good starting point for a modern architecture. End of explanation """
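As the closing remark above says, this is a simplified approximation of a modern architecture. For reference, a hedged sketch of the same Conv + BatchNorm building block written with PyTorch's built-in nn.BatchNorm2d (which also tracks the exponentially weighted running statistics discussed earlier); this is an illustration, not the course's fastai implementation:

```python
import torch.nn as nn
import torch.nn.functional as F

class BnLayerBuiltin(nn.Module):
    """Conv -> ReLU -> BatchNorm, mirroring the hand-rolled BnLayer above."""
    def __init__(self, ni, nf, stride=2, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv2d(ni, nf, kernel_size=kernel_size,
                              stride=stride, bias=False, padding=1)
        self.bn = nn.BatchNorm2d(nf)  # learns the per-filter scale/shift and keeps running stats

    def forward(self, x):
        return self.bn(F.relu(self.conv(x)))
```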
Tahsin-Mayeesha/Udacity-Machine-Learning-Nanodegree
projects/creating_customer_segments/customer_segments.ipynb
mit
# Import libraries necessary for this project import numpy as np import pandas as pd import renders as rs import seaborn as sns from IPython.display import display # Allows the use of display() for DataFrames # Show matplotlib plots inline (nicely formatted in the notebook) %matplotlib inline # Load the wholesale customers dataset try: data = pd.read_csv("customers.csv") data.drop(['Region', 'Channel'], axis = 1, inplace = True) print "Wholesale customers dataset has {} samples with {} features each.".format(*data.shape) except: print "Dataset could not be loaded. Is the dataset missing?" data.head() """ Explanation: Machine Learning Engineer Nanodegree Unsupervised Learning Project 3: Creating Customer Segments Welcome to the third project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully! In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide. Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode. Getting Started In this project, you will analyze a dataset containing data on various customers' annual spending amounts (reported in monetary units) of diverse product categories for internal structure. One goal of this project is to best describe the variation in the different types of customers that a wholesale distributor interacts with. Doing so would equip the distributor with insight into how to best structure their delivery service to meet the needs of each customer. The dataset for this project can be found on the UCI Machine Learning Repository. For the purposes of this project, the features 'Channel' and 'Region' will be excluded in the analysis — with focus instead on the six product categories recorded for customers. Run the code block below to load the wholesale customers dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported. End of explanation """ # Display a description of the dataset display(data.describe()) print data.loc[data['Milk']>20000] """ Explanation: Data Exploration In this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project. Run the code block below to observe a statistical description of the dataset. 
Note that the dataset is composed of six important product categories: 'Fresh', 'Milk', 'Grocery', 'Frozen', 'Detergents_Paper', and 'Delicatessen'. Consider what each category represents in terms of products you could purchase. End of explanation """ # TODO: Select three indices of your choice you wish to sample from the dataset indices = [100,183,309] # Create a DataFrame of the chosen samples samples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop = True) print "Chosen samples of wholesale customers dataset:" display(samples) for column in data.columns: print column + " " + str(data[column].mean()) # Code from reviewer suggestion samples_bar = samples.append(data.describe().loc['mean']) samples_bar.index = indices + ['mean'] _ = samples_bar.plot(kind='bar', figsize=(14,6)) """ Explanation: Implementation: Selecting Samples To get a better understanding of the customers and how their data will transform through the analysis, it would be best to select a few sample data points and explore them in more detail. In the code block below, add three indices of your choice to the indices list which will represent the customers to track. It is suggested to try different sets of samples until you obtain customers that vary significantly from one another. End of explanation """ # Trying with dropping all the features one by one and checking the R^S score. from sklearn.cross_validation import train_test_split from sklearn.tree import DecisionTreeRegressor columns = data.columns for column in columns: # TODO: Make a copy of the DataFrame, using the 'drop' function to drop the given feature new_data = data.drop(column,axis = 1) target_label = data[column] # TODO: Split the data into training and testing sets using the given feature as the target X_train, X_test, y_train, y_test = train_test_split(new_data,target_label,test_size = 0.25, random_state = 0) # TODO: Create a decision tree regressor and fit it to the training set regressor = DecisionTreeRegressor(random_state = 0) regressor.fit(X_train,y_train) # TODO: Report the score of the prediction using the testing set score = regressor.score(X_test,y_test) print "When Removed Feature is " + str(column) + " R^2 score is " + str(round(score,3)) """ Explanation: Question 1 Consider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers. What kind of establishment (customer) could each of the three samples you've chosen represent? Hint: Examples of establishments include places like markets, cafes, and retailers, among many others. Avoid using names for establishments, such as saying "McDonalds" when describing a sample customer as a restaurant. Answer: Sample 0 has average spending on Fresh food, slightly above average on milk and grocery, similar to average spending on Frozen but unusually high spending on detergent paper, it could be a hotel with many rooms which require a lot of cleaning supplies but has an in-house restaurant, or it could be a retailer which supplies cleaning products to other smaller establishments. Sample 1 has above average spending on all the features except for Detergents, but spends the most on Delicatessen, this could be a large supermarket with a deli department. Or it could also be a big deli restaurant given the low spending on detergents. Clearly they are expecting people to buy food related products compared to cleaning supplies. 
Sample 2 has much below average spending on fresh product,relatively high spending on Milk and grocery, unusually low spending on Frozen and Delicatessen and pretty high spending on Detergents. Given high spenidng on Milk, grocery and Detergents I think it's a small supermarket with a bakery department. Implementation: Feature Relevance One interesting thought to consider is if one (or more) of the six product categories is actually relevant for understanding customer purchasing. That is to say, is it possible to determine whether customers purchasing some amount of one category of products will necessarily purchase some proportional amount of another category of products? We can make this determination quite easily by training a supervised regression learner on a subset of the data with one feature removed, and then score how well that model can predict the removed feature. In the code block below, you will need to implement the following: - Assign new_data a copy of the data by removing a feature of your choice using the DataFrame.drop function. - Use sklearn.cross_validation.train_test_split to split the dataset into training and testing sets. - Use the removed feature as your target label. Set a test_size of 0.25 and set a random_state. - Import a decision tree regressor, set a random_state, and fit the learner to the training data. - Report the prediction score of the testing set using the regressor's score function. End of explanation """ # Produce a scatter matrix for each pair of features in the data pd.scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde'); """ Explanation: Based on the R^2 scores, I choose Delicatessen as the dropped feature. Question 2 Which feature did you attempt to predict? What was the reported prediction score? Is this feature is necessary for identifying customers' spending habits? Hint: The coefficient of determination, R^2, is scored between 0 and 1, with 1 being a perfect fit. A negative R^2 implies the model fails to fit the data. Answer: Feature chosen to drop was Delicatessen and it appears the Decision Tree Regressor failed to predict from the purchase of other products how much a particular customer will spend on Deli products. If we drop this feature, we will lose information given spending on other features did not have much correlation with this feature. The reported prediction score was R^2 = -11.664 which means the model absolutely failed to fit the data.We can say that without this feature, we will never know from the other features how much a customer would have spent on Deli products, given lack of correlation between this feature and the other one. Visualize Feature Distributions To get a better understanding of the dataset, we can construct a scatter matrix of each of the six product features present in the data. If you found that the feature you attempted to predict above is relevant for identifying a specific customer, then the scatter matrix below may not show any correlation between that feature and the others. Conversely, if you believe that feature is not relevant for identifying a specific customer, the scatter matrix might show a correlation between that feature and another feature in the data. Run the code block below to produce a scatter matrix. 
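As a numeric complement to the scatter-matrix inspection discussed here, a hedged sketch that prints the pairwise Pearson correlations, using the pandas and seaborn imports already made in the first cell:

```python
# Pairwise correlations back up the visual reading of the scatter matrix.
corr = data.corr()
print(corr.round(2))

# Optional heatmap, since seaborn was already imported as sns.
sns.heatmap(corr, annot=True, cmap='coolwarm')
```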
End of explanation """ # TODO: Scale the data using the natural logarithm log_data = np.log(data) # TODO: Scale the sample data using the natural logarithm log_samples = np.log(samples) # Produce a scatter matrix for each pair of newly-transformed features pd.scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde'); """ Explanation: Question 3 Are there any pairs of features which exhibit some degree of correlation? Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict? How is the data for those features distributed? Hint: Is the data normally distributed? Where do most of the data points lie? Answer: From the visualization the observed pairs are : 1. Grocery and Detergent paper, 2. Grocery and Milk, 3. Milk and Detergent Paper. These pairings are not really relevant for my dropped feature that I attempted to predict given there's not much correlation between the dropped feature Delicatessen and the other features. Most consumers spend relatively low amount on Delicatessen with an average of 1524.87 m.u (monetary unit) with one outlier who purchase a very high amount of deli products(above 40000) and a few other one's above 10000. The visualization does confirm my R^2 that Deli products are quite uncorrelated with other products as the purchase of deli does not really increase with the purchase of other features and neither it decreases with the purchase of other features. It's just relatively low, presumably because most retailers don't have a deli department, or even if they have one, it's a small one. The data does not seem to be normally distributed, rather it's mostly left skewed and the median falls below the mean. Data Preprocessing In this section, you will preprocess the data to create a better representation of customers by performing a scaling on the data and detecting (and optionally removing) outliers. Preprocessing data is often times a critical step in assuring that results you obtain from your analysis are significant and meaningful. Implementation: Feature Scaling If data is not normally distributed, especially if the mean and median vary significantly (indicating a large skew), it is most often appropriate to apply a non-linear scaling — particularly for financial data. One way to achieve this scaling is by using a Box-Cox test, which calculates the best power transformation of the data that reduces skewness. A simpler approach which can work in most cases would be applying the natural logarithm. In the code block below, you will need to implement the following: - Assign a copy of the data to log_data after applying a logarithm scaling. Use the np.log function for this. - Assign a copy of the sample data to log_samples after applying a logrithm scaling. Again, use np.log. End of explanation """ # Display the log-transformed sample data display(log_samples) """ Explanation: Observation After applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before). Run the code below to see how the sample data has changed after having the natural logarithm applied to it. 
End of explanation """ # For Counting how many times each indices appear as outliers frequent_outlier_indices = { } # For each feature find the data points with extreme high or low values for feature in log_data.keys(): # TODO: Calculate Q1 (25th percentile of the data) for the given feature Q1 = np.percentile(log_data[feature],25) # TODO: Calculate Q3 (75th percentile of the data) for the given feature Q3 = np.percentile(log_data[feature],75) # TODO: Use the interquartile range to calculate an outlier step (1.5 times the interquartile range) step = 1.5*(Q3-Q1) # Display the outliers print "Data points considered outliers for the feature '{}':".format(feature) outlier_dataframe = log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))] display(outlier_dataframe) outlier_index_list = list(outlier_dataframe.index.values) for index in outlier_index_list: if index in frequent_outlier_indices: frequent_outlier_indices[index]+=1 else: frequent_outlier_indices[index]=1 #Only keep indices which occur more than once frequent_outlier_indices = {index:value for index,value in frequent_outlier_indices.iteritems() if value>1} print frequent_outlier_indices # OPTIONAL: Select the indices for data points you wish to remove outliers= [key for key in frequent_outlier_indices] display(data.ix[outliers]) # Remove the outliers, if any were specified good_data = log_data.drop(log_data.index[outliers]).reset_index(drop = True) print data.shape print good_data.shape """ Explanation: Implementation: Outlier Detection Detecting outliers in the data is extremely important in the data preprocessing step of any analysis. The presence of outliers can often skew results which take into consideration these data points. There are many "rules of thumb" for what constitutes an outlier in a dataset. Here, we will use Tukey's Method for identfying outliers: An outlier step is calculated as 1.5 times the interquartile range (IQR). A data point with a feature that is beyond an outlier step outside of the IQR for that feature is considered abnormal. In the code block below, you will need to implement the following: - Assign the value of the 25th percentile for the given feature to Q1. Use np.percentile for this. - Assign the value of the 75th percentile for the given feature to Q3. Again, use np.percentile. - Assign the calculation of an outlier step for the given feature to step. - Optionally remove data points from the dataset by adding indices to the outliers list. NOTE: If you choose to remove any outliers, ensure that the sample data does not contain any of these points! Once you have performed this implementation, the dataset will be stored in the variable good_data. End of explanation """ from sklearn.decomposition import PCA # TODO: Apply PCA by fitting the good data with the same number of dimensions as features n_features = len(good_data.columns) pca = PCA(n_components = n_features) pca.fit(good_data) # TODO: Transform the sample log-data using the PCA fit above pca_samples = pca.transform(log_samples) # Generate PCA results plot pca_results = rs.pca_results(good_data, pca) """ Explanation: Question 4 Are there any data points considered outliers for more than one feature? Should these data points be removed from the dataset? If any data points were added to the outliers list to be removed, explain why. Answer: Are there any data points considered outliers for more than one feature? Yes, the indices 128,65,75,66,154 was considered for more than one feature. 
The data frame for these indices can be seen above. Should these data points be removed from the dataset? If any data points were added to the outliers list to be removed, explain why. I decided to remove all of these data points because outliers can skew the directions of the ordered principal components, which should point along the directions of maximum variance in the data. PCA tries to minimize the information loss (measured by the distance from the points to their projections onto the new feature axes), but these outliers add noise to the dataset and distort the variance, so PCA would not be able to output principal components that point in the directions of maximum variance. If we then ran clustering on this dataset, these outliers might end up in their own clusters of one or two points, which will not be informative because their dissimilarity (measured by the average distance to the cluster means or some other distance-based metric) would be too large compared to the other data points. To be clear, removing points that are outliers in only one feature may also shift the variance, so it would be a good idea to repeat similar experiments with only a few of the points removed and check how that works. We can iteratively remove outliers one by one and compare the results in a future experiment. Feature Transformation In this section you will use principal component analysis (PCA) to draw conclusions about the underlying structure of the wholesale customer data. Since using PCA on a dataset calculates the dimensions which best maximize variance, we will find which compound combinations of features best describe customers. Implementation: PCA Now that the data has been scaled to a more normal distribution and has had any necessary outliers removed, we can now apply PCA to the good_data to discover which dimensions about the data best maximize the variance of features involved. In addition to finding these dimensions, PCA will also report the explained variance ratio of each dimension, that is, how much variance within the data is explained by that dimension alone. Note that a component (dimension) from PCA can be considered a new "feature" of the space, however it is a composition of the original features present in the data. In the code block below, you will need to implement the following: - Import sklearn.decomposition.PCA and assign the results of fitting PCA in six dimensions with good_data to pca. - Apply a PCA transformation of the sample log-data log_samples using pca.transform, and assign the results to pca_samples. End of explanation """
# Display sample log-data after having a PCA transformation applied
display(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values))
""" Explanation: Question 5 How much variance in the data is explained in total by the first and second principal component? What about the first four principal components? Using the visualization provided above, discuss what the first four dimensions best represent in terms of customer spending. Hint: A positive increase in a specific dimension corresponds with an increase of the positive-weighted features and a decrease of the negative-weighted features. The rate of increase or decrease is based on the individual feature weights. Answer <i>How much variance in the data is explained by the first and second principal component?</i> 0.4430 + 0.2638 = 0.7068, so 70.68% of the variance in the data is explained by the first and second principal components.
<i>What about the first four principal components?</i> 0.4430 + 0.2638 + 0.1231 + 0.1012 = 0.9311, so 93.11% of the total variance in the data is explained by the first four principal components. <i>Using the visualization provided above, discuss what the first four dimensions best represent in terms of customer spending.</i> The first principal component shows strong positive correlation with detergents, grocery and milk, a small positive correlation with deli, and a low negative correlation with fresh and frozen. This suggests that consumers who spend a lot on detergents also tend to spend a lot on grocery and milk, and slightly more on deli, but less on fresh and frozen food. This may represent retailers who don't focus much on fresh or frozen food distribution but generally distribute grocery and milk. It could also be hotels, which mostly buy groceries and milk for producing meals, which may explain the lack of frozen food purchases. The second principal component shows strong positive correlation with fresh, frozen and deli and slightly positive correlation with the other three features. This group appears to be quite different from the first component, since in the first component fresh and frozen spending was relatively low. It shows that some consumers spend a lot on fresh, frozen and deli products, while also buying some detergent, grocery and milk at the same time. This is probably a retailer focusing on selling fresh, frozen and deli products. Deli products are still somewhat unusual to sell, and a retailer that focuses on deli may be responding to consumers who like fresh, organic products, while other consumers buy frozen food for convenience or for reselling. The third principal component shows strong positive correlation with deli products and moderate positive correlation with frozen products, but it also shows strong negative correlation with fresh and detergents. It seems that consumers who spend a lot on deli products may also spend a lot on frozen products, but a lot less on fresh and detergents. This could represent deli restaurants, because retailers and hotels would likely buy more detergents, which would show up as a positive correlation with that feature. The fourth principal component shows a very strong positive correlation with the frozen feature and a small positive correlation with detergents, but it has a strong negative correlation with deli products and a small negative correlation with the fresh feature. So it seems that consumers who spend a lot on frozen products and a little on detergents spend a lot less on deli products and slightly less on fresh products. This may explain the difference between supermarkets/retailers with a deli department, which spend a lot on frozen foods and other items, versus those without one. It is probably a retailer that sells a lot of frozen products, some grocery and other products, but does not sell deli or fresh products at all. Resource used : https://onlinecourses.science.psu.edu/stat505/node/54 Observation Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it in six dimensions. Observe the numerical value for the first four dimensions of the sample points. Consider if this is consistent with your initial interpretation of the sample points.
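To back the percentages above with code, a short sketch (using the six-component pca and the good_data objects from the cells above) can show the cumulative explained variance and the signed component weights that this interpretation relies on:
# Cumulative explained variance: how much of the data the first k components retain
cum_var = np.cumsum(pca.explained_variance_ratio_)
display(np.round(cum_var, 4))

# The signed weights (loadings) behind each component; the positive/negative pattern
# is what the discussion of each dimension is based on
loadings = pd.DataFrame(pca.components_, columns = good_data.columns,
                        index = ['Dimension {}'.format(i+1) for i in range(pca.n_components_)])
display(loadings.round(4))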
End of explanation """ # TODO: Apply PCA by fitting the good data with only two dimensions pca = PCA(n_components = 2) pca.fit(good_data) # TODO: Transform the good data using the PCA fit above reduced_data = pca.transform(good_data) # TODO: Transform the sample log-data using the PCA fit above pca_samples = pca.transform(log_samples) # Create a DataFrame for the reduced data reduced_data = pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2']) """ Explanation: Implementation: Dimensionality Reduction When using principal component analysis, one of the main goals is to reduce the dimensionality of the data — in effect, reducing the complexity of the problem. Dimensionality reduction comes at a cost: Fewer dimensions used implies less of the total variance in the data is being explained. Because of this, the cumulative explained variance ratio is extremely important for knowing how many dimensions are necessary for the problem. Additionally, if a signifiant amount of variance is explained by only two or three dimensions, the reduced data can be visualized afterwards. In the code block below, you will need to implement the following: - Assign the results of fitting PCA in two dimensions with good_data to pca. - Apply a PCA transformation of good_data using pca.transform, and assign the reuslts to reduced_data. - Apply a PCA transformation of the sample log-data log_samples using pca.transform, and assign the results to pca_samples. End of explanation """ # Display sample log-data after applying PCA transformation in two dimensions display(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2'])) """ Explanation: Observation Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions. Observe how the values for the first two dimensions remains unchanged when compared to a PCA transformation in six dimensions. End of explanation """ from sklearn.mixture import GMM from sklearn.metrics import silhouette_score scores = [] for n in xrange(2,11): # TODO: Apply your clustering algorithm of choice to the reduced data clusterer = GMM(n_components = n,random_state = 0) clusterer.fit(reduced_data) # TODO: Predict the cluster for each data point preds = clusterer.predict(reduced_data) # TODO: Find the cluster centers centers = clusterer.means_ # TODO: Predict the cluster for each transformed sample data point sample_preds = clusterer.predict(pca_samples) # TODO: Calculate the mean silhouette coefficient for the number of clusters chosen score = silhouette_score(reduced_data,preds) scores.append(score) print "For cluster number "+ str(n) + " the score is " + str(score) import matplotlib.pyplot as plt components = list(xrange(2,11)) plt.plot(components,scores) plt.xlabel("Number of clusters") plt.ylabel("Silhoutte score") """ Explanation: Clustering In this section, you will choose to use either a K-Means clustering algorithm or a Gaussian Mixture Model clustering algorithm to identify the various customer segments hidden in the data. You will then recover specific data points from the clusters to understand their significance by transforming them back into their original dimension and scale. Question 6 What are the advantages to using a K-Means clustering algorithm? What are the advantages to using a Gaussian Mixture Model clustering algorithm? Given your observations about the wholesale customer data so far, which of the two algorithms will you use and why? 
Answer: K-means clustering is a hard clustering method (each instance gets assigned to exactly one cluster instead of receiving a probability for each cluster). It is easy to implement and simple to understand, and depending on the domain, by selecting a good distance metric (for measuring similarity) practitioners can try many different variants of k-means to get good results. A Gaussian Mixture Model is a soft clustering technique (for each instance we generate the probability of it belonging to each of the clusters). We assume the data points are drawn from a mixture of a finite number of Gaussian distributions with unknown parameters, and for each instance we estimate the probability that it came from each Gaussian after estimating those parameters. It allows a point to be shared by two clusters, which is often a more realistic assumption because many instances have shared characteristics. Also, by getting the probabilities it is easier to see the boundary cases, i.e. instances which could have belonged to either cluster, which is more useful for industry decision making. I think I'll use the Gaussian Mixture Model: if the distributor gets the boundary cases as well as the probability of each customer belonging to the different clusters, it will be more helpful for segmenting them. They may still decide to assign the boundary cases to the closest cluster, but at least it would be an informed decision. K-means can also be viewed as a special case of Gaussian Mixture Models, so if a data point falls clearly into one cluster and there are no boundary cases, GMM will pick that up, while k-means will hard-assign the instance to some cluster regardless. It is therefore a good idea to check for edge cases, given that business revenue will be impacted by the segmentation method in the future. Resources used : http://scikit-learn.org/stable/modules/mixture.html#gmm-classifier , https://www.quora.com/What-is-the-difference-between-K-means-and-the-mixture-model-of-Gaussian , https://www.quora.com/What-is-an-intuitive-explanation-of-Gaussian-mixture-models Implementation: Creating Clusters Depending on the problem, the number of clusters that you expect to be in the data may already be known. When the number of clusters is not known a priori, there is no guarantee that a given number of clusters best segments the data, since it is unclear what structure exists in the data, if any. However, we can quantify the "goodness" of a clustering by calculating each data point's silhouette coefficient. The silhouette coefficient for a data point measures how similar it is to its assigned cluster from -1 (dissimilar) to 1 (similar). Calculating the mean silhouette coefficient provides for a simple scoring method of a given clustering. In the code block below, you will need to implement the following: - Fit a clustering algorithm to the reduced_data and assign it to clusterer. - Predict the cluster for each data point in reduced_data using clusterer.predict and assign them to preds. - Find the cluster centers using the algorithm's respective attribute and assign them to centers. - Predict the cluster for each sample data point in pca_samples and assign them sample_preds. - Import sklearn.metrics.silhouette_score and calculate the silhouette score of reduced_data against preds. - Assign the silhouette score to score and print the result.
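As a small hedged sketch of the soft-assignment idea discussed above (it assumes a fitted GMM named clusterer and the reduced_data from the other cells, and the 0.6 cutoff is purely illustrative):
probs = clusterer.predict_proba(reduced_data)   # one row per customer, one column per cluster
max_prob = probs.max(axis = 1)
borderline = (max_prob < 0.6)                   # 0.6 is an arbitrary illustrative cutoff

# Each row sums to 1: customers near a cluster boundary get split probabilities
display(pd.DataFrame(probs[:5]).round(3))
display(borderline.sum())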
End of explanation """ # TODO: Apply your clustering algorithm of choice to the reduced data clusterer = GMM(n_components = 2,random_state = 0) clusterer.fit(reduced_data) # TODO: Predict the cluster for each data point preds = clusterer.predict(reduced_data) # TODO: Find the cluster centers centers = clusterer.means_ # TODO: Predict the cluster for each transformed sample data point sample_preds = clusterer.predict(pca_samples) # TODO: Calculate the mean silhouette coefficient for the number of clusters chosen score = silhouette_score(reduced_data,preds) """ Explanation: Question 7 Report the silhouette score for several cluster numbers you tried. Of these, which number of clusters has the best silhouette score? Answer: Scores for the clusterer numbers : For cluster number 2 the score is 0.411818864386 For cluster number 3 the score is 0.373560747175 For cluster number 4 the score is 0.308243479507 For cluster number 5 the score is 0.295441470747 For cluster number 6 the score is 0.276478936811 For cluster number 7 the score is 0.323119845936 For cluster number 8 the score is 0.3120673235 For cluster number 9 the score is 0.290997808766 For cluster number 10 the score is 0.311964697843 Cluster number 2 has the best silhoutte score so I'm choosing it for the final model. End of explanation """ # Display the results of the clustering from implementation rs.cluster_results(reduced_data, preds, centers, pca_samples) """ Explanation: Cluster Visualization Once you've chosen the optimal number of clusters for your clustering algorithm using the scoring metric above, you can now visualize the results by executing the code block below. Note that, for experimentation purposes, you are welcome to adjust the number of clusters for your clustering algorithm to see various visualizations. The final visualization provided should, however, correspond with the optimal number of clusters. End of explanation """ # TODO: Inverse transform the centers log_centers = pca.inverse_transform(centers) # TODO: Exponentiate the centers true_centers = np.exp(log_centers) # Display the true centers segments = ['Segment {}'.format(i) for i in range(0,len(centers))] true_centers = pd.DataFrame(np.round(true_centers), columns = data.keys()) true_centers.index = segments display(true_centers) # Display a description of the dataset display(data.describe()) """ Explanation: Implementation: Data Recovery Each cluster present in the visualization above has a central point. These centers (or means) are not specifically data points from the data, but rather the averages of all the data points predicted in the respective clusters. For the problem of creating customer segments, a cluster's center point corresponds to the average customer of that segment. Since the data is currently reduced in dimension and scaled by a logarithm, we can recover the representative customer spending from these data points by applying the inverse transformations. In the code block below, you will need to implement the following: - Apply the inverse transform to centers using pca.inverse_transform and assign the new centers to log_centers. - Apply the inverse function of np.log to log_centers using np.exp and assign the true centers to true_centers. 
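As an aside (not part of the required implementation), the same inverse transformations can be applied to a single customer to see how much detail the two-dimensional representation keeps; this sketch assumes the two-component pca and the good_data from the cells above:
point = good_data.iloc[[0]]                             # one customer, on the log scale
approx_log = pca.inverse_transform(pca.transform(point))
approx_spending = np.exp(approx_log)                    # back to monetary units, approximately

# Compare the reconstruction with the actual spending; the gap is the information
# given up by keeping only two principal components
display(pd.DataFrame(np.round(approx_spending, 1), columns = good_data.columns))
display(np.round(np.exp(point), 1))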
End of explanation """ true_centers = true_centers.append(data.describe().loc['mean']) _ = true_centers.plot(kind='bar', figsize=(15,6)) """ Explanation: Question 8 Consider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project. What set of establishments could each of the customer segments represent? Hint: A customer who is assigned to 'Cluster X' should best identify with the establishments represented by the feature set of 'Segment X'. End of explanation """ # Display the predictions for i, pred in enumerate(sample_preds): print "Sample point", i, "predicted to be in Cluster", pred """ Explanation: Answer: I think the segments basically represent two sorts of customers, customers who spend below average on every feature(segment 0) and the customers who spend near average or above average on the features available(segment 1). I believe segment 0 represents smaller consumers such as restaurants or cafe's, while the segment 1 generally represents big establishments such as supermarkets or retailers. The retailers may differ from each other on the basis of spending differently on frozen or deli products, but they spend higher than average on the fresh, grocery and milk compared to the segment 0 which are smaller establishments. As we see from the visualization, average customers in segment 0 has below the mean spending on all features while the average customer spending in segment 1 looks much similar to the mean spending. Given the data is left skewed, it is expected there is a subgroup of consumers with high spending on all features. Question 9 For each sample point, which customer segment from Question 8 best represents it? Are the predictions for each sample point consistent with this? Run the code block below to find which cluster each sample point is predicted to be. End of explanation """ # Display the clustering results based on 'Channel' data rs.channel_results(reduced_data, outliers, pca_samples) """ Explanation: Answer: Past result : This is my first take on the samples (from above) Sample 0 has average spending on Fresh food, slightly above average on milk and grocery, similar to average spending on Frozen but unusually high spending on detergent paper, it could be a hotel with many rooms which require a lot of cleaning supplies but has an in-house restaurant , or it could be a retailer which supplies cleaning products to other smaller establishments. Sample 1 has above average spending on all the features except for Detergents, but spends the most on Delicatessen, this could be a large supermarket with a deli department. Or it could also be a big deli restaurant given the low spending on detergents. Clearly they are expecting people to buy food related products compared to cleaning supplies. Sample 2 has much below average spending on fresh product,relatively high spending on Milk and grocery, unusually low spending on Frozen and Delicatessen and pretty high spending on Detergents. Given high spenidng on Milk, grocery and Detergents I think it's a small supermarket with a bakery department. Current interpretation : I assumed my samples differ significantly from each other because they spent relatively very high or low on one feature, such as detergent paper or deli products or milk. However the clustering predicts all of them are from segment one i.e all of them are retailers. 
I think this is actually consistent with my hypotheses that all of them are some variations of hotels or supermarket. And it appears that the clustering is just segmenting the smaller establishments from the bigger ones which can actually be useful in practice. It's true that I assumed sample 0 is different from sample 1,2 because it spent a lot on detergent products, I also assumed sample 1 is different from sample 0 and 2 because it spent a lot on deli products. Sample 2 had a lot below average spending on the fresh product, but it's also possible it's just a recording error. Even with relative differences, on the whole they were consumers who spent quite high on all the features compared to smaller establishments. Perhaps with different samples we'd be able to take a look at the custoemrs from segment 0, but overall I think the results are consistent. I also think choosing samples who spend a lot on some dimension was a good choice still because now I know after the clustering that even if some consumers differ a lot from each other in superficial aspects, such as high spending on one dimension, they still differ from the smaller establishments when it comes to average spending on all the features. Conclusion Question 10 Companies often run A/B tests when making small changes to their products or services. If the wholesale distributor wanted to change its delivery service from 5 days a week to 3 days a week, how would you use the structure of the data to help them decide on a group of customers to test? Hint: Would such a change in the delivery service affect all customers equally? How could the distributor identify who it affects the most? Answer: For each segment we can randomly divide the customers into two groups, control(group A) and variation group(group B). Group A will get the usual 5 days a week service while the group B will get the 3 days a week service. My assumption is that the consumers from segment 0(the smaller establishments) may be comfortable with the 3 days a week service, while the customers from segment 1 (big retailers/hotels) will probably need 5 days a week service for frequent refills, but only after comparing the feedback from the control and variation groups of each segment we can decide. Given customers in a segment are similar to each other, the distributor should compare the feedbacks from the control vs variation group between a segment, and how each segment differs from each other as a whole too given larger establishments may not be comfortable with the 3 days service and might be prone to switching. For getting feedback on the change, we can use survey questions or track the purchasing behavior of the control vs the variation groups. Question 11 Assume the wholesale distributor wanted to predict a new feature for each customer based on the purchasing information available. How could the wholesale distributor use the structure of the clustering data you've found to assist a supervised learning analysis? Hint: What other input feature could the supervised learner use besides the six product features to help make a prediction? Answer: Besides the six product features, perhaps whether a customer is from segment 1 or 0, e.g (smaller or bigger establishment/below average vs equal/higher than average spending) can be used as a feature and in fact it may be a very important categorical variable. 
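A hedged sketch of that idea, with a hypothetical prediction target (nothing in this dataset defines one), might look like this:
from sklearn.ensemble import RandomForestClassifier

engineered = good_data.copy()
engineered['segment'] = preds   # the cluster assignment from the GMM above as an extra feature

# 'new_feature_target' is a placeholder for whatever new attribute the distributor
# wants to predict; since it does not exist here, the fit is left commented out.
# clf = RandomForestClassifier(random_state = 0)
# clf.fit(engineered, new_feature_target)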
Visualizing Underlying Distributions At the beginning of this project, it was discussed that the 'Channel' and 'Region' features would be excluded from the dataset so that the customer product categories were emphasized in the analysis. By reintroducing the 'Channel' feature to the dataset, an interesting structure emerges when considering the same PCA dimensionality reduction applied earlier on to the original dataset. Run the code block below to see how each data point is labeled either 'HoReCa' (Hotel/Restaurant/Cafe) or 'Retail' the reduced space. In addition, you will find the sample points are circled in the plot, which will identify their labeling. End of explanation """
scheib/chromium
third_party/tensorflow-text/src/docs/tutorials/bert_glue.ipynb
bsd-3-clause
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2020 The TensorFlow Hub Authors. End of explanation """ !pip install -q -U tensorflow-text """ Explanation: <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/text/tutorials/bert_glue"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/text/blob/master/docs/tutorials/bert_glue.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/text/blob/master/docs/tutorials/bert_glue.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/text/docs/tutorials/bert_glue.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> <td> <a href="https://tfhub.dev/google/collections/bert/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a> </td> </table> Solve GLUE tasks using BERT on TPU BERT can be used to solve many problems in natural language processing. You will learn how to fine-tune BERT for many tasks from the GLUE benchmark: CoLA (Corpus of Linguistic Acceptability): Is the sentence grammatically correct? SST-2 (Stanford Sentiment Treebank): The task is to predict the sentiment of a given sentence. MRPC (Microsoft Research Paraphrase Corpus): Determine whether a pair of sentences are semantically equivalent. QQP (Quora Question Pairs2): Determine whether a pair of questions are semantically equivalent. MNLI (Multi-Genre Natural Language Inference): Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). QNLI(Question-answering Natural Language Inference): The task is to determine whether the context sentence contains the answer to the question. RTE(Recognizing Textual Entailment): Determine if a sentence entails a given hypothesis or not. WNLI(Winograd Natural Language Inference): The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. This tutorial contains complete end-to-end code to train these models on a TPU. You can also run this notebook on a GPU, by changing one line (described below). In this notebook, you will: Load a BERT model from TensorFlow Hub Choose one of GLUE tasks and download the dataset Preprocess the text Fine-tune BERT (examples are given for single-sentence and multi-sentence datasets) Save the trained model and use it Key point: The model you develop will be end-to-end. The preprocessing logic will be included in the model itself, making it capable of accepting raw strings as input. Note: This notebook should be run using a TPU. 
In Colab, choose Runtime -> Change runtime type and verify that a TPU is selected. Setup You will use a separate model to preprocess text before using it to fine-tune BERT. This model depends on tensorflow/text, which you will install below. End of explanation """ !pip install -q -U tf-models-official !pip install -U tfds-nightly import os import tensorflow as tf import tensorflow_hub as hub import tensorflow_datasets as tfds import tensorflow_text as text # A dependency of the preprocessing model import tensorflow_addons as tfa from official.nlp import optimization import numpy as np tf.get_logger().setLevel('ERROR') """ Explanation: You will use the AdamW optimizer from tensorflow/models to fine-tune BERT, which you will install as well. End of explanation """ os.environ["TFHUB_MODEL_LOAD_FORMAT"]="UNCOMPRESSED" """ Explanation: Next, configure TFHub to read checkpoints directly from TFHub's Cloud Storage buckets. This is only recommended when running TFHub models on TPU. Without this setting TFHub would download the compressed file and extract the checkpoint locally. Attempting to load from these local files will fail with the following error: InvalidArgumentError: Unimplemented: File system scheme '[local]' not implemented This is because the TPU can only read directly from Cloud Storage buckets. Note: This setting is automatic in Colab. End of explanation """ import os if os.environ['COLAB_TPU_ADDR']: cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='') tf.config.experimental_connect_to_cluster(cluster_resolver) tf.tpu.experimental.initialize_tpu_system(cluster_resolver) strategy = tf.distribute.TPUStrategy(cluster_resolver) print('Using TPU') elif tf.config.list_physical_devices('GPU'): strategy = tf.distribute.MirroredStrategy() print('Using GPU') else: raise ValueError('Running on CPU is not recommended.') """ Explanation: Connect to the TPU worker The following code connects to the TPU worker and changes TensorFlow's default device to the CPU device on the TPU worker. It also defines a TPU distribution strategy that you will use to distribute model training onto the 8 separate TPU cores available on this one TPU worker. See TensorFlow's TPU guide for more information. 
End of explanation """ #@title Choose a BERT model to fine-tune bert_model_name = 'bert_en_uncased_L-12_H-768_A-12' #@param ["bert_en_uncased_L-12_H-768_A-12", "bert_en_uncased_L-24_H-1024_A-16", "bert_en_wwm_uncased_L-24_H-1024_A-16", "bert_en_cased_L-12_H-768_A-12", "bert_en_cased_L-24_H-1024_A-16", "bert_en_wwm_cased_L-24_H-1024_A-16", "bert_multi_cased_L-12_H-768_A-12", "small_bert/bert_en_uncased_L-2_H-128_A-2", "small_bert/bert_en_uncased_L-2_H-256_A-4", "small_bert/bert_en_uncased_L-2_H-512_A-8", "small_bert/bert_en_uncased_L-2_H-768_A-12", "small_bert/bert_en_uncased_L-4_H-128_A-2", "small_bert/bert_en_uncased_L-4_H-256_A-4", "small_bert/bert_en_uncased_L-4_H-512_A-8", "small_bert/bert_en_uncased_L-4_H-768_A-12", "small_bert/bert_en_uncased_L-6_H-128_A-2", "small_bert/bert_en_uncased_L-6_H-256_A-4", "small_bert/bert_en_uncased_L-6_H-512_A-8", "small_bert/bert_en_uncased_L-6_H-768_A-12", "small_bert/bert_en_uncased_L-8_H-128_A-2", "small_bert/bert_en_uncased_L-8_H-256_A-4", "small_bert/bert_en_uncased_L-8_H-512_A-8", "small_bert/bert_en_uncased_L-8_H-768_A-12", "small_bert/bert_en_uncased_L-10_H-128_A-2", "small_bert/bert_en_uncased_L-10_H-256_A-4", "small_bert/bert_en_uncased_L-10_H-512_A-8", "small_bert/bert_en_uncased_L-10_H-768_A-12", "small_bert/bert_en_uncased_L-12_H-128_A-2", "small_bert/bert_en_uncased_L-12_H-256_A-4", "small_bert/bert_en_uncased_L-12_H-512_A-8", "small_bert/bert_en_uncased_L-12_H-768_A-12", "albert_en_base", "albert_en_large", "albert_en_xlarge", "albert_en_xxlarge", "electra_small", "electra_base", "experts_pubmed", "experts_wiki_books", "talking-heads_base", "talking-heads_large"] map_name_to_handle = { 'bert_en_uncased_L-12_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3', 'bert_en_uncased_L-24_H-1024_A-16': 'https://tfhub.dev/tensorflow/bert_en_uncased_L-24_H-1024_A-16/3', 'bert_en_wwm_uncased_L-24_H-1024_A-16': 'https://tfhub.dev/tensorflow/bert_en_wwm_uncased_L-24_H-1024_A-16/3', 'bert_en_cased_L-12_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_en_cased_L-12_H-768_A-12/3', 'bert_en_cased_L-24_H-1024_A-16': 'https://tfhub.dev/tensorflow/bert_en_cased_L-24_H-1024_A-16/3', 'bert_en_wwm_cased_L-24_H-1024_A-16': 'https://tfhub.dev/tensorflow/bert_en_wwm_cased_L-24_H-1024_A-16/3', 'bert_multi_cased_L-12_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_multi_cased_L-12_H-768_A-12/3', 'small_bert/bert_en_uncased_L-2_H-128_A-2': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-128_A-2/1', 'small_bert/bert_en_uncased_L-2_H-256_A-4': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-256_A-4/1', 'small_bert/bert_en_uncased_L-2_H-512_A-8': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-512_A-8/1', 'small_bert/bert_en_uncased_L-2_H-768_A-12': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-768_A-12/1', 'small_bert/bert_en_uncased_L-4_H-128_A-2': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-128_A-2/1', 'small_bert/bert_en_uncased_L-4_H-256_A-4': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-256_A-4/1', 'small_bert/bert_en_uncased_L-4_H-512_A-8': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-512_A-8/1', 'small_bert/bert_en_uncased_L-4_H-768_A-12': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-768_A-12/1', 'small_bert/bert_en_uncased_L-6_H-128_A-2': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-128_A-2/1', 'small_bert/bert_en_uncased_L-6_H-256_A-4': 
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-256_A-4/1', 'small_bert/bert_en_uncased_L-6_H-512_A-8': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-512_A-8/1', 'small_bert/bert_en_uncased_L-6_H-768_A-12': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-768_A-12/1', 'small_bert/bert_en_uncased_L-8_H-128_A-2': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-128_A-2/1', 'small_bert/bert_en_uncased_L-8_H-256_A-4': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-256_A-4/1', 'small_bert/bert_en_uncased_L-8_H-512_A-8': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-512_A-8/1', 'small_bert/bert_en_uncased_L-8_H-768_A-12': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-768_A-12/1', 'small_bert/bert_en_uncased_L-10_H-128_A-2': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-128_A-2/1', 'small_bert/bert_en_uncased_L-10_H-256_A-4': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-256_A-4/1', 'small_bert/bert_en_uncased_L-10_H-512_A-8': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-512_A-8/1', 'small_bert/bert_en_uncased_L-10_H-768_A-12': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-768_A-12/1', 'small_bert/bert_en_uncased_L-12_H-128_A-2': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-128_A-2/1', 'small_bert/bert_en_uncased_L-12_H-256_A-4': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-256_A-4/1', 'small_bert/bert_en_uncased_L-12_H-512_A-8': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-512_A-8/1', 'small_bert/bert_en_uncased_L-12_H-768_A-12': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-768_A-12/1', 'albert_en_base': 'https://tfhub.dev/tensorflow/albert_en_base/2', 'albert_en_large': 'https://tfhub.dev/tensorflow/albert_en_large/2', 'albert_en_xlarge': 'https://tfhub.dev/tensorflow/albert_en_xlarge/2', 'albert_en_xxlarge': 'https://tfhub.dev/tensorflow/albert_en_xxlarge/2', 'electra_small': 'https://tfhub.dev/google/electra_small/2', 'electra_base': 'https://tfhub.dev/google/electra_base/2', 'experts_pubmed': 'https://tfhub.dev/google/experts/bert/pubmed/2', 'experts_wiki_books': 'https://tfhub.dev/google/experts/bert/wiki_books/2', 'talking-heads_base': 'https://tfhub.dev/tensorflow/talkheads_ggelu_bert_en_base/1', 'talking-heads_large': 'https://tfhub.dev/tensorflow/talkheads_ggelu_bert_en_large/1', } map_model_to_preprocess = { 'bert_en_uncased_L-24_H-1024_A-16': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'bert_en_uncased_L-12_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'bert_en_wwm_cased_L-24_H-1024_A-16': 'https://tfhub.dev/tensorflow/bert_en_cased_preprocess/3', 'bert_en_cased_L-24_H-1024_A-16': 'https://tfhub.dev/tensorflow/bert_en_cased_preprocess/3', 'bert_en_cased_L-12_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_en_cased_preprocess/3', 'bert_en_wwm_uncased_L-24_H-1024_A-16': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-2_H-128_A-2': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-2_H-256_A-4': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-2_H-512_A-8': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-2_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-4_H-128_A-2': 
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-4_H-256_A-4': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-4_H-512_A-8': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-4_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-6_H-128_A-2': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-6_H-256_A-4': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-6_H-512_A-8': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-6_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-8_H-128_A-2': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-8_H-256_A-4': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-8_H-512_A-8': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-8_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-10_H-128_A-2': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-10_H-256_A-4': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-10_H-512_A-8': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-10_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-12_H-128_A-2': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-12_H-256_A-4': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-12_H-512_A-8': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-12_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'bert_multi_cased_L-12_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_multi_cased_preprocess/3', 'albert_en_base': 'https://tfhub.dev/tensorflow/albert_en_preprocess/3', 'albert_en_large': 'https://tfhub.dev/tensorflow/albert_en_preprocess/3', 'albert_en_xlarge': 'https://tfhub.dev/tensorflow/albert_en_preprocess/3', 'albert_en_xxlarge': 'https://tfhub.dev/tensorflow/albert_en_preprocess/3', 'electra_small': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'electra_base': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'experts_pubmed': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'experts_wiki_books': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'talking-heads_base': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'talking-heads_large': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', } tfhub_handle_encoder = map_name_to_handle[bert_model_name] tfhub_handle_preprocess = map_model_to_preprocess[bert_model_name] print('BERT model selected :', tfhub_handle_encoder) print('Preprocessing model auto-selected:', tfhub_handle_preprocess) """ Explanation: Loading models from TensorFlow Hub Here you can choose which BERT model you will load from TensorFlow Hub and fine-tune. There are multiple BERT models available to choose from. BERT-Base, Uncased and seven more models with trained weights released by the original BERT authors. 
Small BERTs have the same general architecture but fewer and/or smaller Transformer blocks, which lets you explore tradeoffs between speed, size and quality. ALBERT: four different sizes of "A Lite BERT" that reduces model size (but not computation time) by sharing parameters between layers. BERT Experts: eight models that all have the BERT-base architecture but offer a choice between different pre-training domains, to align more closely with the target task. Electra has the same architecture as BERT (in three different sizes), but gets pre-trained as a discriminator in a set-up that resembles a Generative Adversarial Network (GAN). BERT with Talking-Heads Attention and Gated GELU [base, large] has two improvements to the core of the Transformer architecture. See the model documentation linked above for more details. In this tutorial, you will start with BERT-base. You can use larger and more recent models for higher accuracy, or smaller models for faster training times. To change the model, you only need to switch a single line of code (shown below). All the differences are encapsulated in the SavedModel you will download from TensorFlow Hub. End of explanation """ bert_preprocess = hub.load(tfhub_handle_preprocess) tok = bert_preprocess.tokenize(tf.constant(['Hello TensorFlow!'])) print(tok) """ Explanation: Preprocess the text On the Classify text with BERT colab the preprocessing model is used directly embedded with the BERT encoder. This tutorial demonstrates how to do preprocessing as part of your input pipeline for training, using Dataset.map, and then merge it into the model that gets exported for inference. That way, both training and inference can work from raw text inputs, although the TPU itself requires numeric inputs. TPU requirements aside, it can help performance have preprocessing done asynchronously in an input pipeline (you can learn more in the tf.data performance guide). This tutorial also demonstrates how to build multi-input models, and how to adjust the sequence length of the inputs to BERT. Let's demonstrate the preprocessing model. End of explanation """ text_preprocessed = bert_preprocess.bert_pack_inputs([tok, tok], tf.constant(20)) print('Shape Word Ids : ', text_preprocessed['input_word_ids'].shape) print('Word Ids : ', text_preprocessed['input_word_ids'][0, :16]) print('Shape Mask : ', text_preprocessed['input_mask'].shape) print('Input Mask : ', text_preprocessed['input_mask'][0, :16]) print('Shape Type Ids : ', text_preprocessed['input_type_ids'].shape) print('Type Ids : ', text_preprocessed['input_type_ids'][0, :16]) """ Explanation: Each preprocessing model also provides a method, .bert_pack_inputs(tensors, seq_length), which takes a list of tokens (like tok above) and a sequence length argument. This packs the inputs to create a dictionary of tensors in the format expected by the BERT model. End of explanation """ def make_bert_preprocess_model(sentence_features, seq_length=128): """Returns Model mapping string features to BERT inputs. Args: sentence_features: a list with the names of string-valued features. seq_length: an integer that defines the sequence length of BERT inputs. Returns: A Keras Model that can be called on a list or dict of string Tensors (with the order or names, resp., given by sentence_features) and returns a dict of tensors for input to BERT. """ input_segments = [ tf.keras.layers.Input(shape=(), dtype=tf.string, name=ft) for ft in sentence_features] # Tokenize the text to word pieces. 
bert_preprocess = hub.load(tfhub_handle_preprocess) tokenizer = hub.KerasLayer(bert_preprocess.tokenize, name='tokenizer') segments = [tokenizer(s) for s in input_segments] # Optional: Trim segments in a smart way to fit seq_length. # Simple cases (like this example) can skip this step and let # the next step apply a default truncation to approximately equal lengths. truncated_segments = segments # Pack inputs. The details (start/end token ids, dict of output tensors) # are model-dependent, so this gets loaded from the SavedModel. packer = hub.KerasLayer(bert_preprocess.bert_pack_inputs, arguments=dict(seq_length=seq_length), name='packer') model_inputs = packer(truncated_segments) return tf.keras.Model(input_segments, model_inputs) """ Explanation: Here are some details to pay attention to: - input_mask The mask allows the model to cleanly differentiate between the content and the padding. The mask has the same shape as the input_word_ids, and contains a 1 anywhere the input_word_ids is not padding. - input_type_ids has the same shape as input_mask, but inside the non-padded region, contains a 0 or a 1 indicating which sentence the token is a part of. Next, you will create a preprocessing model that encapsulates all this logic. Your model will take strings as input, and return appropriately formatted objects which can be passed to BERT. Each BERT model has a specific preprocessing model, make sure to use the proper one described on the BERT's model documentation. Note: BERT adds a "position embedding" to the token embedding of each input, and these come from a fixed-size lookup table. That imposes a max seq length of 512 (which is also a practical limit, due to the quadratic growth of attention computation). For this Colab 128 is good enough. End of explanation """ test_preprocess_model = make_bert_preprocess_model(['my_input1', 'my_input2']) test_text = [np.array(['some random test sentence']), np.array(['another sentence'])] text_preprocessed = test_preprocess_model(test_text) print('Keys : ', list(text_preprocessed.keys())) print('Shape Word Ids : ', text_preprocessed['input_word_ids'].shape) print('Word Ids : ', text_preprocessed['input_word_ids'][0, :16]) print('Shape Mask : ', text_preprocessed['input_mask'].shape) print('Input Mask : ', text_preprocessed['input_mask'][0, :16]) print('Shape Type Ids : ', text_preprocessed['input_type_ids'].shape) print('Type Ids : ', text_preprocessed['input_type_ids'][0, :16]) """ Explanation: Let's demonstrate the preprocessing model. You will create a test with two sentences input (input1 and input2). The output is what a BERT model would expect as input: input_word_ids, input_masks and input_type_ids. End of explanation """ tf.keras.utils.plot_model(test_preprocess_model, show_shapes=True, show_dtype=True) """ Explanation: Let's take a look at the model's structure, paying attention to the two inputs you just defined. 
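As a quick sanity check on that description (a sketch, assuming the text_preprocessed dictionary produced by the demonstration code), the mask should be 1 exactly where there is a real token id and 0 over the padding:
# Padding positions have word id 0, so the mask can be derived from the ids
mask_from_ids = tf.cast(text_preprocessed['input_word_ids'] != 0, tf.int32)
print(tf.reduce_all(mask_from_ids == text_preprocessed['input_mask']).numpy())  # expected: True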
End of explanation """ AUTOTUNE = tf.data.AUTOTUNE def load_dataset_from_tfds(in_memory_ds, info, split, batch_size, bert_preprocess_model): is_training = split.startswith('train') dataset = tf.data.Dataset.from_tensor_slices(in_memory_ds[split]) num_examples = info.splits[split].num_examples if is_training: dataset = dataset.shuffle(num_examples) dataset = dataset.repeat() dataset = dataset.batch(batch_size) dataset = dataset.map(lambda ex: (bert_preprocess_model(ex), ex['label'])) dataset = dataset.cache().prefetch(buffer_size=AUTOTUNE) return dataset, num_examples """ Explanation: To apply the preprocessing in all the inputs from the dataset, you will use the map function from the dataset. The result is then cached for performance. End of explanation """ def build_classifier_model(num_classes): class Classifier(tf.keras.Model): def __init__(self, num_classes): super(Classifier, self).__init__(name="prediction") self.encoder = hub.KerasLayer(tfhub_handle_encoder, trainable=True) self.dropout = tf.keras.layers.Dropout(0.1) self.dense = tf.keras.layers.Dense(num_classes) def call(self, preprocessed_text): encoder_outputs = self.encoder(preprocessed_text) pooled_output = encoder_outputs["pooled_output"] x = self.dropout(pooled_output) x = self.dense(x) return x model = Classifier(num_classes) return model """ Explanation: Define your model You are now ready to define your model for sentence or sentence pair classification by feeding the preprocessed inputs through the BERT encoder and putting a linear classifier on top (or other arrangement of layers as you prefer), and using dropout for regularization. End of explanation """ test_classifier_model = build_classifier_model(2) bert_raw_result = test_classifier_model(text_preprocessed) print(tf.sigmoid(bert_raw_result)) """ Explanation: Let's try running the model on some preprocessed inputs. 
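The classifier below uses a subclassed tf.keras.Model; as a sketch of an alternative arrangement, the same head (encoder, dropout, dense layer) can be written with the Keras functional API, assuming the default sequence length of 128 from the preprocessing model:
def build_classifier_model_functional(num_classes, seq_length=128):
  # Inputs match the dictionary produced by the preprocessing model
  encoder_inputs = dict(
      input_word_ids=tf.keras.layers.Input(shape=(seq_length,), dtype=tf.int32),
      input_mask=tf.keras.layers.Input(shape=(seq_length,), dtype=tf.int32),
      input_type_ids=tf.keras.layers.Input(shape=(seq_length,), dtype=tf.int32),
  )
  encoder = hub.KerasLayer(tfhub_handle_encoder, trainable=True)
  pooled_output = encoder(encoder_inputs)['pooled_output']
  x = tf.keras.layers.Dropout(0.1)(pooled_output)
  output = tf.keras.layers.Dense(num_classes)(x)
  return tf.keras.Model(encoder_inputs, output, name='prediction')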
End of explanation """ tfds_name = 'glue/cola' #@param ['glue/cola', 'glue/sst2', 'glue/mrpc', 'glue/qqp', 'glue/mnli', 'glue/qnli', 'glue/rte', 'glue/wnli'] tfds_info = tfds.builder(tfds_name).info sentence_features = list(tfds_info.features.keys()) sentence_features.remove('idx') sentence_features.remove('label') available_splits = list(tfds_info.splits.keys()) train_split = 'train' validation_split = 'validation' test_split = 'test' if tfds_name == 'glue/mnli': validation_split = 'validation_matched' test_split = 'test_matched' num_classes = tfds_info.features['label'].num_classes num_examples = tfds_info.splits.total_num_examples print(f'Using {tfds_name} from TFDS') print(f'This dataset has {num_examples} examples') print(f'Number of classes: {num_classes}') print(f'Features {sentence_features}') print(f'Splits {available_splits}') with tf.device('/job:localhost'): # batch_size=-1 is a way to load the dataset into memory in_memory_ds = tfds.load(tfds_name, batch_size=-1, shuffle_files=True) # The code below is just to show some samples from the selected dataset print(f'Here are some sample rows from {tfds_name} dataset') sample_dataset = tf.data.Dataset.from_tensor_slices(in_memory_ds[train_split]) labels_names = tfds_info.features['label'].names print(labels_names) print() sample_i = 1 for sample_row in sample_dataset.take(5): samples = [sample_row[feature] for feature in sentence_features] print(f'sample row {sample_i}') for sample in samples: print(sample.numpy()) sample_label = sample_row['label'] print(f'label: {sample_label} ({labels_names[sample_label]})') print() sample_i += 1 """ Explanation: Choose a task from GLUE You are going to use a TensorFlow DataSet from the GLUE benchmark suite. Colab lets you download these small datasets to the local filesystem, and the code below reads them entirely into memory, because the separate TPU worker host cannot access the local filesystem of the colab runtime. For bigger datasets, you'll need to create your own Google Cloud Storage bucket and have the TPU worker read the data from there. You can learn more in the TPU guide. It's recommended to start with the CoLa dataset (for single sentence) or MRPC (for multi sentence) since these are small and don't take long to fine tune. End of explanation """ def get_configuration(glue_task): loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) if glue_task == 'glue/cola': metrics = tfa.metrics.MatthewsCorrelationCoefficient(num_classes=2) else: metrics = tf.keras.metrics.SparseCategoricalAccuracy( 'accuracy', dtype=tf.float32) return metrics, loss """ Explanation: The dataset also determines the problem type (classification or regression) and the appropriate loss function for training. 
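For the larger tasks the Cloud Storage note above matters: the TPU worker cannot read the Colab host's local files, so TFDS would need to read from a bucket instead. A hedged sketch, with a hypothetical bucket name and therefore left commented out:
# gcs_data_dir = 'gs://my-tfds-bucket/tensorflow_datasets'   # hypothetical bucket name
# in_memory_ds = tfds.load(tfds_name, batch_size=-1, shuffle_files=True,
#                          data_dir=gcs_data_dir)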
End of explanation """ epochs = 3 batch_size = 32 init_lr = 2e-5 print(f'Fine tuning {tfhub_handle_encoder} model') bert_preprocess_model = make_bert_preprocess_model(sentence_features) with strategy.scope(): # metric have to be created inside the strategy scope metrics, loss = get_configuration(tfds_name) train_dataset, train_data_size = load_dataset_from_tfds( in_memory_ds, tfds_info, train_split, batch_size, bert_preprocess_model) steps_per_epoch = train_data_size // batch_size num_train_steps = steps_per_epoch * epochs num_warmup_steps = num_train_steps // 10 validation_dataset, validation_data_size = load_dataset_from_tfds( in_memory_ds, tfds_info, validation_split, batch_size, bert_preprocess_model) validation_steps = validation_data_size // batch_size classifier_model = build_classifier_model(num_classes) optimizer = optimization.create_optimizer( init_lr=init_lr, num_train_steps=num_train_steps, num_warmup_steps=num_warmup_steps, optimizer_type='adamw') classifier_model.compile(optimizer=optimizer, loss=loss, metrics=[metrics]) classifier_model.fit( x=train_dataset, validation_data=validation_dataset, steps_per_epoch=steps_per_epoch, epochs=epochs, validation_steps=validation_steps) """ Explanation: Train your model Finally, you can train the model end-to-end on the dataset you chose. Distribution Recall the set-up code at the top, which has connected the colab runtime to a TPU worker with multiple TPU devices. To distribute training onto them, you will create and compile your main Keras model within the scope of the TPU distribution strategy. (For details, see Distributed training with Keras.) Preprocessing, on the other hand, runs on the CPU of the worker host, not the TPUs, so the Keras model for preprocessing as well as the training and validation datasets mapped with it are built outside the distribution strategy scope. The call to Model.fit() will take care of distributing the passed-in dataset to the model replicas. Note: The single TPU worker host already has the resource objects (think: a lookup table) needed for tokenization. Scaling up to multiple workers requires use of Strategy.experimental_distribute_datasets_from_function with a function that loads the preprocessing model separately onto each worker. Optimizer Fine-tuning follows the optimizer set-up from BERT pre-training (as in Classify text with BERT): It uses the AdamW optimizer with a linear decay of a notional initial learning rate, prefixed with a linear warm-up phase over the first 10% of training steps (num_warmup_steps). In line with the BERT paper, the initial learning rate is smaller for fine-tuning (best of 5e-5, 3e-5, 2e-5). End of explanation """ main_save_path = './my_models' bert_type = tfhub_handle_encoder.split('/')[-2] saved_model_name = f'{tfds_name.replace("/", "_")}_{bert_type}' saved_model_path = os.path.join(main_save_path, saved_model_name) preprocess_inputs = bert_preprocess_model.inputs bert_encoder_inputs = bert_preprocess_model(preprocess_inputs) bert_outputs = classifier_model(bert_encoder_inputs) model_for_export = tf.keras.Model(preprocess_inputs, bert_outputs) print('Saving', saved_model_path) # Save everything on the Colab host (even the variables from TPU memory) save_options = tf.saved_model.SaveOptions(experimental_io_device='/job:localhost') model_for_export.save(saved_model_path, include_optimizer=False, options=save_options) """ Explanation: Export for inference You will create a final model that has the preprocessing part and the fine-tuned BERT we've just created. 
At inference time, preprocessing needs to be part of the model (because there is no longer a separate input queue as for training data that does it). Preprocessing is not just computation; it has its own resources (the vocab table) that must be attached to the Keras Model that is saved for export. This final assembly is what will be saved. You are going to save the model on colab and later you can download to keep it for the future (View -> Table of contents -> Files). End of explanation """ with tf.device('/job:localhost'): reloaded_model = tf.saved_model.load(saved_model_path) #@title Utility methods def prepare(record): model_inputs = [[record[ft]] for ft in sentence_features] return model_inputs def prepare_serving(record): model_inputs = {ft: record[ft] for ft in sentence_features} return model_inputs def print_bert_results(test, bert_result, dataset_name): bert_result_class = tf.argmax(bert_result, axis=1)[0] if dataset_name == 'glue/cola': print('sentence:', test[0].numpy()) if bert_result_class == 1: print('This sentence is acceptable') else: print('This sentence is unacceptable') elif dataset_name == 'glue/sst2': print('sentence:', test[0]) if bert_result_class == 1: print('This sentence has POSITIVE sentiment') else: print('This sentence has NEGATIVE sentiment') elif dataset_name == 'glue/mrpc': print('sentence1:', test[0]) print('sentence2:', test[1]) if bert_result_class == 1: print('Are a paraphrase') else: print('Are NOT a paraphrase') elif dataset_name == 'glue/qqp': print('question1:', test[0]) print('question2:', test[1]) if bert_result_class == 1: print('Questions are similar') else: print('Questions are NOT similar') elif dataset_name == 'glue/mnli': print('premise :', test[0]) print('hypothesis:', test[1]) if bert_result_class == 1: print('This premise is NEUTRAL to the hypothesis') elif bert_result_class == 2: print('This premise CONTRADICTS the hypothesis') else: print('This premise ENTAILS the hypothesis') elif dataset_name == 'glue/qnli': print('question:', test[0]) print('sentence:', test[1]) if bert_result_class == 1: print('The question is NOT answerable by the sentence') else: print('The question is answerable by the sentence') elif dataset_name == 'glue/rte': print('sentence1:', test[0]) print('sentence2:', test[1]) if bert_result_class == 1: print('Sentence1 DOES NOT entails sentence2') else: print('Sentence1 entails sentence2') elif dataset_name == 'glue/wnli': print('sentence1:', test[0]) print('sentence2:', test[1]) if bert_result_class == 1: print('Sentence1 DOES NOT entails sentence2') else: print('Sentence1 entails sentence2') print('BERT raw results:', bert_result[0]) print() """ Explanation: Test the model The final step is testing the results of your exported model. Just to make some comparison, let's reload the model and test it using some inputs from the test split from the dataset. Note: The test is done on the colab host, not the TPU worker that it has connected to, so it appears below with explicit device placements. You can omit those when loading the SavedModel elsewhere. 
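As a sketch of an equivalent way to express the host-side loading described above, using load options instead of an explicit device scope (the behaviour should match, but this is an assumption rather than something the tutorial verifies):
# Mirror of the SaveOptions used above, applied on the loading side
load_options = tf.saved_model.LoadOptions(experimental_io_device='/job:localhost')
reloaded_on_host = tf.saved_model.load(saved_model_path, options=load_options)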
End of explanation """ with tf.device('/job:localhost'): test_dataset = tf.data.Dataset.from_tensor_slices(in_memory_ds[test_split]) for test_row in test_dataset.shuffle(1000).map(prepare).take(5): if len(sentence_features) == 1: result = reloaded_model(test_row[0]) else: result = reloaded_model(list(test_row)) print_bert_results(test_row, result, tfds_name) """ Explanation: Test End of explanation """ with tf.device('/job:localhost'): serving_model = reloaded_model.signatures['serving_default'] for test_row in test_dataset.shuffle(1000).map(prepare_serving).take(5): result = serving_model(**test_row) # The 'prediction' key is the classifier's defined model name. print_bert_results(list(test_row.values()), result['prediction'], tfds_name) """ Explanation: If you want to use your model on TF Serving, remember that it will call your SavedModel through one of its named signatures. Notice there are some small differences in the input. In Python, you can test them as follows: End of explanation """
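# Optional sanity check (a minimal sketch, assuming the export above succeeded): print the
# serving signature's input and output specs, so you can see exactly which feature names and
# which output key (here 'prediction') a TF Serving client would have to use.
with tf.device('/job:localhost'):
    serving_model = reloaded_model.signatures['serving_default']
    print('Signature inputs :', serving_model.structured_input_signature)
    print('Signature outputs:', serving_model.structured_outputs)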
littlewizardLI/Udacity-ML-nanodegrees
Project-practice--naive_bayes_tutorial/Naive_Bayes_tutorial.ipynb
apache-2.0
''' Solution ''' import pandas as pd # Dataset from - https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection df = pd.read_table('smsspamcollection/SMSSpamCollection', sep='\t', header=None, names=['label', 'sms_message']) # Output printing out first 5 columns df.head() """ Explanation: Our Mission Spam detection is one of the major applications of Machine Learning in the interwebs today. Pretty much all of the major email service providers have spam detection systems built in and automatically classify such mail as 'Junk Mail'. In this mission we will be using the Naive Bayes algorithm to create a model that can classify 'https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection' SMS messages as spam or not spam, based on the training we give to the model. It is important to have some level of intuition as to what a spammy text message might look like. Usually they have words like 'free', 'win', 'winner', 'cash', 'prize' and the like in them as these texts are designed to catch your eye and in some sense tempt you to open them. Also, spam messages tend to have words written in all capitals and also tend to use a lot of exclamation marks. To the recipient, it is usually pretty straightforward to identify a spam text and our objective here is to train a model to do that for us! Being able to identify spam messages is a binary classification problem as messages are classified as either 'Spam' or 'Not Spam' and nothing else. Also, this is a supervised learning problem, as we will be feeding a labelled dataset into the model, that it can learn from, to make future predictions. Step 0: Introduction to the Naive Bayes Theorem Bayes theorem is one of the earliest probabilistic inference algorithms developed by Reverend Bayes (which he used to try and infer the existence of God no less) and still performs extremely well for certain use cases. It's best to understand this theorem using an example. Let's say you are a member of the Secret Service and you have been deployed to protect the Democratic presidential nominee during one of his/her campaign speeches. Being a public event that is open to all, your job is not easy and you have to be on the constant lookout for threats. So one place to start is to put a certain threat-factor for each person. So based on the features of an individual, like the age, sex, and other smaller factors like is the person carrying a bag?, does the person look nervous? etc. you can make a judgement call as to if that person is viable threat. If an individual ticks all the boxes up to a level where it crosses a threshold of doubt in your mind, you can take action and remove that person from the vicinity. The Bayes theorem works in the same way as we are computing the probability of an event(a person being a threat) based on the probabilities of certain related events(age, sex, presence of bag or not, nervousness etc. of the person). One thing to consider is the independence of these features amongst each other. For example if a child looks nervous at the event then the likelihood of that person being a threat is not as much as say if it was a grown man who was nervous. To break this down a bit further, here there are two features we are considering, age AND nervousness. Say we look at these features individually, we could design a model that flags ALL persons that are nervous as potential threats. However, it is likely that we will have a lot of false positives as there is a strong chance that minors present at the event will be nervous. 
Hence by considering the age of a person along with the 'nervousness' feature we would definitely get a more accurate result as to who are potential threats and who aren't. This is the 'Naive' bit of the theorem where it considers each feature to be independant of each other which may not always be the case and hence that can affect the final judgement. In short, the Bayes theorem calculates the probability of a certain event happening(in our case, a message being spam) based on the joint probabilistic distributions of certain other events(in our case, a message being classified as spam). We will dive into the workings of the Bayes theorem later in the mission, but first, let us understand the data we are going to work with. Step 1.1: Understanding our dataset ### We will be using a 'https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection' dataset from the UCI Machine Learning repository which has a very good collection of datasets for experimental research purposes. Here's a preview of the data: <img src="images/dqnb.png" height="1242" width="1242"> The columns in the data set are currently not named and as you can see, there are 2 columns. The first column takes two values, 'ham' which signifies that the message is not spam, and 'spam' which signifies that the message is spam. The second column is the text content of the SMS message that is being classified. Instructions: * Import the dataset into a pandas dataframe using the read_table method. Because this is a tab separated dataset we will be using '\t' as the value for the 'sep' argument which specifies this format. * Also, rename the column names by specifying a list ['label, 'sms_message'] to the 'names' argument of read_table(). * Print the first five values of the dataframe with the new column names. End of explanation """ ''' Solution ''' df['label'] = df.label.map({'ham':0, 'spam':1}) print(df.shape) df.head() # returns (rows, columns) """ Explanation: Step 1.2: Data Preprocessing Now that we have a basic understanding of what our dataset looks like, lets convert our labels to binary variables, 0 to represent 'ham'(i.e. not spam) and 1 to represent 'spam' for ease of computation. You might be wondering why do we need to do this step? The answer to this lies in how scikit-learn handles inputs. Scikit-learn only deals with numerical values and hence if we were to leave our label values as strings, scikit-learn would do the conversion internally(more specifically, the string labels will be cast to unknown float values). Our model would still be able to make predictions if we left our labels as strings but we could have issues later when calculating performance metrics, for example when calculating our precision and recall scores. Hence, to avoid unexpected 'gotchas' later, it is good practice to have our categorical values be fed into our model as integers. Instructions: * Convert the values in the 'label' colum to numerical values using map method as follows: {'ham':0, 'spam':1} This maps the 'ham' value to 0 and the 'spam' value to 1. * Also, to get an idea of the size of the dataset we are dealing with, print out number of rows and columns using 'shape'. 
End of explanation """ ''' Solution: ''' documents = ['Hello, how are you!', 'Win money, win from home.', 'Call me now.', 'Hello, Call hello you tomorrow?'] lower_case_documents = [] for i in documents: lower_case_documents.append(i.lower()) print(lower_case_documents) """ Explanation: Step 2.1: Bag of words What we have here in our data set is a large collection of text data (5,572 rows of data). Most ML algorithms rely on numerical data to be fed into them as input, and email/sms messages are usually text heavy. Here we'd like to introduce the Bag of Words(BoW) concept which is a term used to specify the problems that have a 'bag of words' or a collection of text data that needs to be worked with. The basic idea of BoW is to take a piece of text and count the frequency of the words in that text. It is important to note that the BoW concept treats each word individually and the order in which the words occur does not matter. Using a process which we will go through now, we can covert a collection of documents to a matrix, with each document being a row and each word(token) being the column, and the corresponding (row,column) values being the frequency of occurrance of each word or token in that document. For example: Lets say we have 4 documents as follows: ['Hello, how are you!', 'Win money, win from home.', 'Call me now', 'Hello, Call you tomorrow?'] Our objective here is to convert this set of text to a frequency distribution matrix, as follows: <img src="images/countvectorizer.png" height="542" width="542"> Here as we can see, the documents are numbered in the rows, and each word is a column name, with the corresponding value being the frequency of that word in the document. Lets break this down and see how we can do this conversion using a small set of documents. To handle this, we will be using sklearns <a href = 'http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer'> sklearn.feature_extraction.text.CountVectorizer </a> method which does the following: It tokenizes the string(separates the string into individual words) and gives an integer ID to each token. It counts the occurrance of each of those tokens. Please Note: The CountVectorizer method automatically converts all tokenized words to their lower case form so that it does not treat words like 'He' and 'he' differently. It does this using the lowercase parameter which is by default set to True. It also ignores all punctuation so that words followed by a punctuation mark (for example: 'hello!') are not treated differently than the same words not prefixed or suffixed by a punctuation mark (for example: 'hello'). It does this using the token_pattern parameter which has a default regular expression which selects tokens of 2 or more alphanumeric characters. The third parameter to take note of is the stop_words parameter. Stop words refer to the most commonly used words in a language. They include words like 'am', 'an', 'and', 'the' etc. By setting this parameter value to english, CountVectorizer will automatically ignore all words(from our input text) that are found in the built in list of english stop words in scikit-learn. This is extremely helpful as stop words can skew our calculations when we are trying to find certain key words that are indicative of spam. 
We will dive into the application of each of these into our model in a later step, but for now it is important to be aware of such preprocessing techniques available to us when dealing with textual data. Step 2.2: Implementing Bag of Words from scratch Before we dive into scikit-learn's Bag of Words(BoW) library to do the dirty work for us, let's implement it ourselves first so that we can understand what's happening behind the scenes. Step 1: Convert all strings to their lower case form. Let's say we have a document set: documents = ['Hello, how are you!', 'Win money, win from home.', 'Call me now.', 'Hello, Call hello you tomorrow?'] Instructions: * Convert all the strings in the documents set to their lower case. Save them into a list called 'lower_case_documents'. You can convert strings to their lower case in python by using the lower() method. End of explanation """ ''' Solution: ''' sans_punctuation_documents = [] import string for i in lower_case_documents: sans_punctuation_documents.append(i.translate(str.maketrans('', '', string.punctuation))) print(sans_punctuation_documents) """ Explanation: Step 2: Removing all punctuations Instructions: Remove all punctuation from the strings in the document set. Save them into a list called 'sans_punctuation_documents'. End of explanation """ ''' Solution: ''' preprocessed_documents = [] for i in sans_punctuation_documents: preprocessed_documents.append(i.split(' ')) print(preprocessed_documents) """ Explanation: Step 3: Tokenization Tokenizing a sentence in a document set means splitting up a sentence into individual words using a delimiter. The delimiter specifies what character we will use to identify the beginning and the end of a word(for example we could use a single space as the delimiter for identifying words in our document set.) Instructions: Tokenize the strings stored in 'sans_punctuation_documents' using the split() method. and store the final document set in a list called 'preprocessed_documents'. End of explanation """ ''' Solution ''' frequency_list = [] import pprint from collections import Counter for i in preprocessed_documents: frequency_counts = Counter(i) frequency_list.append(frequency_counts) pprint.pprint(frequency_list) """ Explanation: Step 4: Count frequencies Now that we have our document set in the required format, we can proceed to counting the occurrence of each word in each document of the document set. We will use the Counter method from the Python collections library for this purpose. Counter counts the occurrence of each item in the list and returns a dictionary with the key as the item being counted and the corresponding value being the count of that item in the list. Instructions: Using the Counter() method and preprocessed_documents as the input, create a dictionary with the keys being each word in each document and the corresponding values being the frequncy of occurrence of that word. Save each Counter dictionary as an item in a list called 'frequency_list'. End of explanation """ ''' Here we will look to create a frequency matrix on a smaller document set to make sure we understand how the document-term matrix generation happens. We have created a sample document set 'documents'. ''' documents = ['Hello, how are you!', 'Win money, win from home.', 'Call me now.', 'Hello, Call hello you tomorrow?'] """ Explanation: Congratulations! You have implemented the Bag of Words process from scratch! 
As we can see in our previous output, we have a frequency distribution dictionary which gives a clear view of the text that we are dealing with. We should now have a solid understanding of what is happening behind the scenes in the sklearn.feature_extraction.text.CountVectorizer method of scikit-learn. We will now implement sklearn.feature_extraction.text.CountVectorizer method in the next step. Step 2.3: Implementing Bag of Words in scikit-learn Now that we have implemented the BoW concept from scratch, let's go ahead and use scikit-learn to do this process in a clean and succinct way. We will use the same document set as we used in the previous step. End of explanation """ ''' Solution ''' from sklearn.feature_extraction.text import CountVectorizer count_vector = CountVectorizer() """ Explanation: Instructions: Import the sklearn.feature_extraction.text.CountVectorizer method and create an instance of it called 'count_vector'. End of explanation """ ''' Practice node: Print the 'count_vector' object which is an instance of 'CountVectorizer()' ''' print(count_vector) """ Explanation: Data preprocessing with CountVectorizer() In Step 2.2, we implemented a version of the CountVectorizer() method from scratch that entailed cleaning our data first. This cleaning involved converting all of our data to lower case and removing all punctuation marks. CountVectorizer() has certain parameters which take care of these steps for us. They are: lowercase = True The lowercase parameter has a default value of True which converts all of our text to its lower case form. token_pattern = (?u)\\b\\w\\w+\\b The token_pattern parameter has a default regular expression value of (?u)\\b\\w\\w+\\b which ignores all punctuation marks and treats them as delimiters, while accepting alphanumeric strings of length greater than or equal to 2, as individual tokens or words. stop_words The stop_words parameter, if set to english will remove all words from our document set that match a list of English stop words which is defined in scikit-learn. Considering the size of our dataset and the fact that we are dealing with SMS messages and not larger text sources like e-mail, we will not be setting this parameter value. You can take a look at all the parameter values of your count_vector object by simply printing out the object as follows: End of explanation """ ''' Solution: ''' count_vector.fit(documents) count_vector.get_feature_names() """ Explanation: Instructions: Fit your document dataset to the CountVectorizer object you have created using fit(), and get the list of words which have been categorized as features using the get_feature_names() method. End of explanation """ ''' Solution ''' doc_array = count_vector.transform(documents).toarray() doc_array """ Explanation: The get_feature_names() method returns our feature names for this dataset, which is the set of words that make up our vocabulary for 'documents'. Instructions: Create a matrix with the rows being each of the 4 documents, and the columns being each word. The corresponding (row, column) value is the frequency of occurrance of that word(in the column) in a particular document(in the row). You can do this using the transform() method and passing in the document data set as the argument. The transform() method returns a matrix of numpy integers, you can convert this to an array using toarray(). 
Call the array 'doc_array' End of explanation """ ''' Solution ''' frequency_matrix = pd.DataFrame(doc_array, columns = count_vector.get_feature_names()) frequency_matrix """ Explanation: Now we have a clean representation of the documents in terms of the frequency distribution of the words in them. To make it easier to understand our next step is to convert this array into a dataframe and name the columns appropriately. Instructions: Convert the array we obtained, loaded into 'doc_array', into a dataframe and set the column names to the word names(which you computed earlier using get_feature_names(). Call the dataframe 'frequency_matrix'. End of explanation """ ''' Solution ''' # split into training and testing sets from sklearn.cross_validation import train_test_split X_train, X_test, y_train, y_test = train_test_split(df['sms_message'], df['label'], random_state=1) print('Number of rows in the total set: {}'.format(df.shape[0])) print('Number of rows in the training set: {}'.format(X_train.shape[0])) print('Number of rows in the test set: {}'.format(X_test.shape[0])) """ Explanation: Congratulations! You have successfully implemented a Bag of Words problem for a document dataset that we created. One potential issue that can arise from using this method out of the box is the fact that if our dataset of text is extremely large(say if we have a large collection of news articles or email data), there will be certain values that are more common that others simply due to the structure of the language itself. So for example words like 'is', 'the', 'an', pronouns, grammatical contructs etc could skew our matrix and affect our analyis. There are a couple of ways to mitigate this. One way is to use the stop_words parameter and set its value to english. This will automatically ignore all words(from our input text) that are found in a built in list of English stop words in scikit-learn. Another way of mitigating this is by using the <a href = 'http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html#sklearn.feature_extraction.text.TfidfVectorizer'> sklearn.feature_extraction.text.TfidfVectorizer</a> method. This method is out of scope for the context of this lesson. Step 3.1: Training and testing sets Now that we have understood how to deal with the Bag of Words problem we can get back to our dataset and proceed with our analysis. Our first step in this regard would be to split our dataset into a training and testing set so we can test our model later. Instructions: Split the dataset into a training and testing set by using the train_test_split method in sklearn. Split the data using the following variables: * X_train is our training data for the 'sms_message' column. * y_train is our training data for the 'label' column * X_test is our testing data for the 'sms_message' column. * y_test is our testing data for the 'label' column Print out the number of rows we have in each our training and testing data. End of explanation """ ''' [Practice Node] The code for this segment is in 2 parts. Firstly, we are learning a vocabulary dictionary for the training data and then transforming the data into a document-term matrix; secondly, for the testing data we are only transforming the data into a document-term matrix. This is similar to the process we followed in Step 2.3 We will provide the transformed data to students in the variables 'training_data' and 'testing_data'. 
''' ''' Solution ''' # Instantiate the CountVectorizer method count_vector = CountVectorizer() # Fit the training data and then return the matrix training_data = count_vector.fit_transform(X_train) # Transform testing data and return the matrix. Note we are not fitting the testing data into the CountVectorizer() testing_data = count_vector.transform(X_test) """ Explanation: Step 3.2: Applying Bag of Words processing to our dataset. Now that we have split the data, our next objective is to follow the steps from Step 2: Bag of words and convert our data into the desired matrix format. To do this we will be using CountVectorizer() as we did before. There are two steps to consider here: Firstly, we have to fit our training data (X_train) into CountVectorizer() and return the matrix. Secondly, we have to transform our testing data (X_test) to return the matrix. Note that X_train is our training data for the 'sms_message' column in our dataset and we will be using this to train our model. X_test is our testing data for the 'sms_message' column and this is the data we will be using(after transformation to a matrix) to make predictions on. We will then compare those predictions with y_test in a later step. For now, we have provided the code that does the matrix transformations for you! End of explanation """ ''' Instructions: Calculate probability of getting a positive test result, P(Pos) ''' ''' Solution (skeleton code will be provided) ''' # P(D) p_diabetes = 0.01 # P(~D) p_no_diabetes = 0.99 # Sensitivity or P(Pos|D) p_pos_diabetes = 0.9 # Specificity or P(Neg/~D) p_neg_no_diabetes = 0.9 # P(Pos) p_pos = (p_diabetes * p_pos_diabetes) + (p_no_diabetes * (1 - p_neg_no_diabetes)) print('The probability of getting a positive test result P(Pos) is: {}',format(p_pos)) """ Explanation: Step 4.1: Bayes Theorem implementation from scratch Now that we have our dataset in the format that we need, we can move onto the next portion of our mission which is the algorithm we will use to make our predictions to classify a message as spam or not spam. Remember that at the start of the mission we briefly discussed the Bayes theorem but now we shall go into a little more detail. In layman's terms, the Bayes theorem calculates the probability of an event occurring, based on certain other probabilities that are related to the event in question. It is composed of a prior(the probabilities that we are aware of or that is given to us) and the posterior(the probabilities we are looking to compute using the priors). Let us implement the Bayes Theorem from scratch using a simple example. Let's say we are trying to find the odds of an individual having diabetes, given that he or she was tested for it and got a positive result. In the medical field, such probabilies play a very important role as it usually deals with life and death situatuations. We assume the following: P(D) is the probability of a person having Diabetes. It's value is 0.01 or in other words, 1% of the general population has diabetes(Disclaimer: these values are assumptions and are not reflective of any medical study). P(Pos) is the probability of getting a positive test result. P(Neg) is the probability of getting a negative test result. P(Pos|D) is the probability of getting a positive result on a test done for detecting diabetes, given that you have diabetes. This has a value 0.9. In other words the test is correct 90% of the time. This is also called the Sensitivity or True Positive Rate. 
P(Neg|~D) is the probability of getting a negative result on a test done for detecting diabetes, given that you do not have diabetes. This also has a value of 0.9 and is therefore correct, 90% of the time. This is also called the Specificity or True Negative Rate. The Bayes formula is as follows: <img src="images/bayes_formula.png" height="242" width="242"> P(A) is the prior probability of A occuring independantly. In our example this is P(D). This value is given to us. P(B) is the prior probability of B occuring independantly. In our example this is P(Pos). P(A|B) is the posterior probability that A occurs given B. In our example this is P(D|Pos). That is, the probability of an individual having diabetes, given that, that individual got a positive test result. This is the value that we are looking to calculate. P(B|A) is the likelihood probability of B occuring, given A. In our example this is P(Pos|D). This value is given to us. Putting our values into the formula for Bayes theorem we get: P(D|Pos) = (P(D) * P(Pos|D) / P(Pos) The probability of getting a positive test result P(Pos) can be calulated using the Sensitivity and Specificity as follows: P(Pos) = [P(D) * Sensitivity] + [P(~D) * (1-Specificity))] End of explanation """ ''' Instructions: Compute the probability of an individual having diabetes, given that, that individual got a positive test result. In other words, compute P(D|Pos). The formula is: P(D|Pos) = (P(D) * P(Pos|D) / P(Pos) ''' ''' Solution ''' # P(D|Pos) p_diabetes_pos = (p_diabetes * p_pos_diabetes) / p_pos print('Probability of an individual having diabetes, given that that individual got a positive test result is:\ ',format(p_diabetes_pos)) ''' Instructions: Compute the probability of an individual not having diabetes, given that, that individual got a positive test result. In other words, compute P(~D|Pos). The formula is: P(~D|Pos) = (P(~D) * P(Pos|~D) / P(Pos) Note that P(Pos/~D) can be computed as 1 - P(Neg/~D). Therefore: P(Pos/~D) = p_pos_no_diabetes = 1 - 0.9 = 0.1 ''' ''' Solution ''' # P(Pos/~D) p_pos_no_diabetes = 0.1 # P(~D|Pos) p_no_diabetes_pos = (p_no_diabetes * p_pos_no_diabetes) / p_pos print 'Probability of an individual not having diabetes, given that that individual got a positive test result is:'\ ,p_no_diabetes_pos """ Explanation: Using all of this information we can calculate our posteriors as follows: The probability of an individual having diabetes, given that, that individual got a positive test result: P(D/Pos) = (P(D) * Sensitivity)) / P(Pos) The probability of an individual not having diabetes, given that, that individual got a positive test result: P(~D/Pos) = (P(~D) * (1-Specificity)) / P(Pos) The sum of our posteriors will always equal 1. End of explanation """ ''' Instructions: Compute the probability of the words 'freedom' and 'immigration' being said in a speech, or P(F,I). The first step is multiplying the probabilities of Jill Stein giving a speech with her individual probabilities of saying the words 'freedom' and 'immigration'. Store this in a variable called p_j_text The second step is multiplying the probabilities of Gary Johnson giving a speech with his individual probabilities of saying the words 'freedom' and 'immigration'. Store this in a variable called p_g_text The third step is to add both of these probabilities and you will get P(F,I). 
''' ''' Solution: Step 1 ''' # P(J) p_j = 0.5 # P(J|F) p_j_f = 0.1 # P(J|I) p_j_i = 0.1 p_j_text = p_j * p_j_f * p_j_i print(p_j_text) ''' Solution: Step 2 ''' # P(G) p_g = 0.5 # P(G|F) p_g_f = 0.7 # P(G|I) p_g_i = 0.2 p_g_text = p_g * p_g_f * p_g_i print(p_g_text) ''' Solution: Step 3: Compute P(F,I) and store in p_f_i ''' p_f_i = p_j_text + p_g_text print('Probability of words freedom and immigration being said are: ', format(p_f_i)) """ Explanation: Congratulations! You have implemented Bayes theorem from scratch. Your analysis shows that even if you get a positive test result, there is only a 8.3% chance that you actually have diabetes and a 91.67% chance that you do not have diabetes. This is of course assuming that only 1% of the entire population has diabetes which of course is only an assumption. What does the term 'Naive' in 'Naive Bayes' mean ? The term 'Naive' in Naive Bayes comes from the fact that the algorithm considers the features that it is using to make the predictions to be independent of each other, which may not always be the case. So in our Diabetes example, we are considering only one feature, that is the test result. Say we added another feature, 'exercise'. Let's say this feature has a binary value of 0 and 1, where the former signifies that the individual exercises less than or equal to 2 days a week and the latter signifies that the individual exercises greater than or equal to 3 days a week. If we had to use both of these features, namely the test result and the value of the 'exercise' feature, to compute our final probabilities, Bayes' theorem would fail. Naive Bayes' is an extension of Bayes' theorem that assumes that all the features are independent of each other. Step 4.2: Naive Bayes implementation from scratch Now that you have understood the ins and outs of Bayes Theorem, we will extend it to consider cases where we have more than feature. Let's say that we have two political parties' candidates, 'Jill Stein' of the Green Party and 'Gary Johnson' of the Libertarian Party and we have the probabilities of each of these candidates saying the words 'freedom', 'immigration' and 'environment' when they give a speech: Probability that Jill Stein says 'freedom': 0.1 ---------> P(J|F) Probability that Jill Stein says 'immigration': 0.1 -----> P(J|I) Probability that Jill Stein says 'environment': 0.8 -----> P(J|E) Probability that Gary Johnson says 'freedom': 0.7 -------> P(G|F) Probability that Gary Johnson says 'immigration': 0.2 ---> P(G|I) Probability that Gary Johnson says 'environment': 0.1 ---> P(G|E) And let us also assume that the probablility of Jill Stein giving a speech, P(J) is 0.5 and the same for Gary Johnson, P(G) = 0.5. Given this, what if we had to find the probabilities of Jill Stein saying the words 'freedom' and 'immigration'? This is where the Naive Bayes'theorem comes into play as we are considering two features, 'freedom' and 'immigration'. Now we are at a place where we can define the formula for the Naive Bayes' theorem: <img src="images/naivebayes.png" height="342" width="342"> Here, y is the class variable or in our case the name of the candidate and x1 through xn are the feature vectors or in our case the individual words. The theorem makes the assumption that each of the feature vectors or words (xi) are independent of each other. To break this down, we have to compute the following posterior probabilities: P(J|F,I): Probability of Jill Stein saying the words Freedom and Immigration. 
Using the formula and our knowledge of Bayes' theorem, we can compute this as follows: P(J|F,I) = (P(J) * P(J|F) * P(J|I)) / P(F,I). Here P(F,I) is the probability of the words 'freedom' and 'immigration' being said in a speech. P(G|F,I): Probability of Gary Johnson saying the words Freedom and Immigration. Using the formula, we can compute this as follows: P(G|F,I) = (P(G) * P(G|F) * P(G|I)) / P(F,I) End of explanation """ ''' Instructions: Compute P(J|F,I) using the formula P(J|F,I) = (P(J) * P(J|F) * P(J|I)) / P(F,I) and store it in a variable p_j_fi ''' ''' Solution ''' p_j_fi = p_j_text / p_f_i print('The probability of Jill Stein saying the words Freedom and Immigration: ', format(p_j_fi)) ''' Instructions: Compute P(G|F,I) using the formula P(G|F,I) = (P(G) * P(G|F) * P(G|I)) / P(F,I) and store it in a variable p_g_fi ''' ''' Solution ''' p_g_fi = p_g_text / p_f_i print('The probability of Gary Johnson saying the words Freedom and Immigration: ', format(p_g_fi)) """ Explanation: Now we can compute the probability of P(J|F,I), that is the probability of Jill Stein saying the words Freedom and Immigration and P(G|F,I), that is the probability of Gary Johnson saying the words Freedom and Immigration. End of explanation """ ''' Instructions: We have loaded the training data into the variable 'training_data' and the testing data into the variable 'testing_data'. Import the MultinomialNB classifier and fit the training data into the classifier using fit(). Name your classifier 'naive_bayes'. You will be training the classifier using 'training_data' and y_train' from our split earlier. ''' ''' Solution ''' from sklearn.naive_bayes import MultinomialNB naive_bayes = MultinomialNB() naive_bayes.fit(training_data, y_train) ''' Instructions: Now that our algorithm has been trained using the training data set we can now make some predictions on the test data stored in 'testing_data' using predict(). Save your predictions into the 'predictions' variable. ''' ''' Solution ''' predictions = naive_bayes.predict(testing_data) """ Explanation: And as we can see, just like in the Bayes' theorem case, the sum of our posteriors is equal to 1. Congratulations! You have implemented the Naive Bayes' theorem from scratch. Our analysis shows that there is only a 6.6% chance that Jill Stein of the Green Party uses the words 'freedom' and 'immigration' in her speech as compard the the 93.3% chance for Gary Johnson of the Libertarian party. Another more generic example of Naive Bayes' in action is as when we search for the term 'Sacramento Kings' in a search engine. In order for us to get the results pertaining to the Scramento Kings NBA basketball team, the search engine needs to be able to associate the two words together and not treat them individually, in which case we would get results of images tagged with 'Sacramento' like pictures of city landscapes and images of 'Kings' which could be pictures of crowns or kings from history when what we are looking to get are images of the basketball team. This is a classic case of the search engine treating the words as independant entities and hence being 'naive' in its approach. Applying this to our problem of classifying messages as spam, the Naive Bayes algorithm looks at each word individually and not as associated entities with any kind of link between them. 
In the case of spam detectors, this usually works as there are certain red flag words which can almost guarantee its classification as spam, for example emails with words like 'viagra' are usually classified as spam. Step 5: Naive Bayes implementation using scikit-learn Thankfully, sklearn has several Naive Bayes implementations that we can use and so we do not have to do the math from scratch. We will be using sklearns sklearn.naive_bayes method to make predictions on our dataset. Specifically, we will be using the multinomial Naive Bayes implementation. This particular classifier is suitable for classification with discrete features (such as in our case, word counts for text classification). It takes in integer word counts as its input. On the other hand Gaussian Naive Bayes is better suited for continuous data as it assumes that the input data has a Gaussian(normal) distribution. End of explanation """ ''' Instructions: Compute the accuracy, precision, recall and F1 scores of your model using your test data 'y_test' and the predictions you made earlier stored in the 'predictions' variable. ''' ''' Solution ''' from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score print('Accuracy score: ', format(accuracy_score(y_test, predictions))) print('Precision score: ', format(precision_score(y_test, predictions))) print('Recall score: ', format(recall_score(y_test, predictions))) print('F1 score: ', format(f1_score(y_test, predictions))) """ Explanation: Now that predictions have been made on our test set, we need to check the accuracy of our predictions. Step 6: Evaluating our model Now that we have made predictions on our test set, our next goal is to evaluate how well our model is doing. There are various mechanisms for doing so, but first let's do quick recap of them. Accuracy measures how often the classifier makes the correct prediction. It’s the ratio of the number of correct predictions to the total number of predictions (the number of test data points). Precision tells us what proportion of messages we classified as spam, actually were spam. It is a ratio of true positives(words classified as spam, and which are actually spam) to all positives(all words classified as spam, irrespective of whether that was the correct classificatio), in other words it is the ratio of [True Positives/(True Positives + False Positives)] Recall(sensitivity) tells us what proportion of messages that actually were spam were classified by us as spam. It is a ratio of true positives(words classified as spam, and which are actually spam) to all the words that were actually spam, in other words it is the ratio of [True Positives/(True Positives + False Negatives)] For classification problems that are skewed in their classification distributions like in our case, for example if we had a 100 text messages and only 2 were spam and the rest 98 weren't, accuracy by itself is not a very good metric. We could classify 90 messages as not spam(including the 2 that were spam but we classify them as not spam, hence they would be false negatives) and 10 as spam(all 10 false positives) and still get a reasonably good accuracy score. For such cases, precision and recall come in very handy. These two metrics can be combined to get the F1 score, which is weighted average of the precision and recall scores. This score can range from 0 to 1, with 1 being the best possible F1 score. We will be using all 4 metrics to make sure our model does well. 
For all 4 metrics whose values can range from 0 to 1, having a score as close to 1 as possible is a good indicator of how well our model is doing. End of explanation """
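'''
Optional follow-up (a small sketch that assumes 'y_test' and 'predictions' from the cells
above are still in scope): on a skewed dataset like this one, a confusion matrix is a handy
companion to the four scores, since it shows the raw counts behind precision and recall.
'''
from sklearn.metrics import confusion_matrix

# Rows are the actual classes (0 = ham, 1 = spam), columns are the predicted classes.
print(confusion_matrix(y_test, predictions))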
metpy/MetPy
v0.11/_downloads/c1a3b4ec1d09d4debc078297d433a9b2/Point_Interpolation.ipynb
bsd-3-clause
import cartopy.crs as ccrs import cartopy.feature as cfeature from matplotlib.colors import BoundaryNorm import matplotlib.pyplot as plt import numpy as np from metpy.cbook import get_test_data from metpy.interpolate import (interpolate_to_grid, remove_nan_observations, remove_repeat_coordinates) from metpy.plots import add_metpy_logo def basic_map(proj): """Make our basic default map for plotting""" fig = plt.figure(figsize=(15, 10)) add_metpy_logo(fig, 0, 80, size='large') view = fig.add_axes([0, 0, 1, 1], projection=proj) view.set_extent([-120, -70, 20, 50]) view.add_feature(cfeature.STATES.with_scale('50m')) view.add_feature(cfeature.OCEAN) view.add_feature(cfeature.COASTLINE) view.add_feature(cfeature.BORDERS, linestyle=':') return fig, view def station_test_data(variable_names, proj_from=None, proj_to=None): with get_test_data('station_data.txt') as f: all_data = np.loadtxt(f, skiprows=1, delimiter=',', usecols=(1, 2, 3, 4, 5, 6, 7, 17, 18, 19), dtype=np.dtype([('stid', '3S'), ('lat', 'f'), ('lon', 'f'), ('slp', 'f'), ('air_temperature', 'f'), ('cloud_fraction', 'f'), ('dewpoint', 'f'), ('weather', '16S'), ('wind_dir', 'f'), ('wind_speed', 'f')])) all_stids = [s.decode('ascii') for s in all_data['stid']] data = np.concatenate([all_data[all_stids.index(site)].reshape(1, ) for site in all_stids]) value = data[variable_names] lon = data['lon'] lat = data['lat'] if proj_from is not None and proj_to is not None: try: proj_points = proj_to.transform_points(proj_from, lon, lat) return proj_points[:, 0], proj_points[:, 1], value except Exception as e: print(e) return None return lon, lat, value from_proj = ccrs.Geodetic() to_proj = ccrs.AlbersEqualArea(central_longitude=-97.0000, central_latitude=38.0000) levels = list(range(-20, 20, 1)) cmap = plt.get_cmap('magma') norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True) x, y, temp = station_test_data('air_temperature', from_proj, to_proj) x, y, temp = remove_nan_observations(x, y, temp) x, y, temp = remove_repeat_coordinates(x, y, temp) """ Explanation: Point Interpolation Compares different point interpolation approaches. 
End of explanation """
gx, gy, img = interpolate_to_grid(x, y, temp, interp_type='linear', hres=75000)

img = np.ma.masked_where(np.isnan(img), img)

fig, view = basic_map(to_proj)
mmb = view.pcolormesh(gx, gy, img, cmap=cmap, norm=norm)
fig.colorbar(mmb, shrink=.4, pad=0, boundaries=levels)
""" Explanation: Scipy.interpolate linear End of explanation """
gx, gy, img = interpolate_to_grid(x, y, temp, interp_type='natural_neighbor', hres=75000)

img = np.ma.masked_where(np.isnan(img), img)

fig, view = basic_map(to_proj)
mmb = view.pcolormesh(gx, gy, img, cmap=cmap, norm=norm)
fig.colorbar(mmb, shrink=.4, pad=0, boundaries=levels)
""" Explanation: Natural neighbor interpolation (MetPy implementation) Reference: https://github.com/Unidata/MetPy/files/138653/cwp-657.pdf End of explanation """
gx, gy, img = interpolate_to_grid(x, y, temp, interp_type='cressman', minimum_neighbors=1, hres=75000, search_radius=100000)

img = np.ma.masked_where(np.isnan(img), img)

fig, view = basic_map(to_proj)
mmb = view.pcolormesh(gx, gy, img, cmap=cmap, norm=norm)
fig.colorbar(mmb, shrink=.4, pad=0, boundaries=levels)
""" Explanation: Cressman interpolation: search_radius = 100 km, grid resolution = 75 km, min_neighbors = 1 End of explanation """
gx, gy, img1 = interpolate_to_grid(x, y, temp, interp_type='barnes', hres=75000, search_radius=100000)

img1 = np.ma.masked_where(np.isnan(img1), img1)

fig, view = basic_map(to_proj)
mmb = view.pcolormesh(gx, gy, img1, cmap=cmap, norm=norm)
fig.colorbar(mmb, shrink=.4, pad=0, boundaries=levels)
""" Explanation: Barnes Interpolation: search_radius = 100 km, min_neighbors = 3 End of explanation """
gx, gy, img = interpolate_to_grid(x, y, temp, interp_type='rbf', hres=75000, rbf_func='linear', rbf_smooth=0)

img = np.ma.masked_where(np.isnan(img), img)

fig, view = basic_map(to_proj)
mmb = view.pcolormesh(gx, gy, img, cmap=cmap, norm=norm)
fig.colorbar(mmb, shrink=.4, pad=0, boundaries=levels)

plt.show()
""" Explanation: Radial basis function interpolation: linear End of explanation """
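# A further variation you may want to try (a sketch under the same setup as the cells above):
# the radial basis function interpolator accepts other kernels through 'rbf_func' (for
# example 'cubic' or 'thin_plate'), and a non-zero 'rbf_smooth' relaxes exact interpolation
# at the observation points. The kernel and smoothing values here are illustrative choices.
gx, gy, img = interpolate_to_grid(x, y, temp, interp_type='rbf', hres=75000,
                                  rbf_func='thin_plate', rbf_smooth=0.1)

img = np.ma.masked_where(np.isnan(img), img)

fig, view = basic_map(to_proj)
mmb = view.pcolormesh(gx, gy, img, cmap=cmap, norm=norm)
fig.colorbar(mmb, shrink=.4, pad=0, boundaries=levels)
plt.show()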
sonelu/pypot
samples/notebooks/Benchmark your Poppy robot.ipynb
gpl-3.0
from __future__ import print_function, division from ipywidgets import interact %pylab inline """ Explanation: Benchmark your Poppy robot The goal of this notebook is to help you identify the performance of your robot and where the bottle necks are. We will measure: * the time to read/write the position to one motor (for each of your dynamixel bus) * the time to read/write the positions for all motors (for each of your dynamixel bus) * the regularity of the synchronization loop of pos/speed/load when * only this loop is runnnig * all other synchronization loops are running * everything else is running End of explanation """ results = {} """ Explanation: All bench info will be stored in this dictionary so it's easy to compare with other platforms. End of explanation """ import platform p = platform.platform() print(p) results['platform'] = p import sys v = sys.version print(v) results['python'] = v import pypot import poppy.creatures results['pypot'] = pypot.__version__ print('Pypot version: {}'.format(results['pypot'])) results['poppy-creature'] = poppy.creatures.__version__ print('Poppy-creature version: {}'.format(results['poppy-creature'])) from poppy.creatures import installed_poppy_creatures RobotCls = None def robot_selector(robot): global RobotCls RobotCls = robot interact(robot_selector, robot=installed_poppy_creatures); robot = RobotCls() results['robot'] = RobotCls """ Explanation: What's the platform End of explanation """ for m in robot.motors: m.compliant = True """ Explanation: Make sure all motors are turned off to avoid breaking anything: End of explanation """ import time from pypot.dynamixel.syncloop import MetaDxlController from pypot.dynamixel.controller import PosSpeedLoadDxlController meta_controllers = [c for c in robot._controllers if isinstance(c, MetaDxlController)] controllers = [cc for cc in c.controllers for c in meta_controllers if isinstance(cc, PosSpeedLoadDxlController)] for c in controllers: c.stop() for c in controllers: def wrapped_update(): if not hasattr(c, 't'): c.t = [] c.t.append(time.time()) c.update() c._update = wrapped_update for c in controllers: c.start() """ Explanation: We find the synchronization loop for pos/speed/load and monkey patch them for monitoring. End of explanation """ import psutil def monitor(controllers, duration): for c in controllers: c.stop() c.t = [] c.start() cpu = [] start = time.time() while time.time() - start < duration: time.sleep(1.0) cpu.append(psutil.cpu_percent()) print('Avg CPU usage: {}%'.format(mean(cpu))) return {c: array(c.t) for c in controllers} def freq_plot(logs): for c, t in logs.items(): dt = diff(t) freq = 1.0 / dt print('Avg frq for controller {}: {}ms STD={}ms'.format(c.ids, freq.mean(), freq.std())) hist(freq) xlim(0, 100) """ Explanation: Now, we define our monitor and plotting functions. End of explanation """ def follow_trajectory(motor, duration=5, freq=50): t = linspace(0, duration, duration * freq) a1, f1 = 10.0, 1.0 a2, f2 = 5.0, 0.5 traj = a1 * sin(2 * pi * f1 * t) + a2 * sin(2 * pi * f2 * t) rec = [] motor.compliant = False motor.moving_speed = 0 motor.goal_position = 0 time.sleep(1.) 
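    # Step through the generated trajectory: send each target position at the requested
    # frequency and record the position actually reached by the motor.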
    for p in traj:
        motor.goal_position = p
        rec.append(motor.present_position)

        time.sleep(1.0 / freq)

    motor.compliant = True

    plot(traj)
    plot(rec)
""" Explanation: We also define this follow_trajectory function, which applies a sinusoidal target to one motor (chosen below) and plots how closely its real position follows the target: End of explanation """
motor = None

def motor_selector(m):
    global motor
    motor = getattr(robot, m)

interact(motor_selector, m=[m.name for m in robot.motors]);
""" Explanation: Now choose which motor you want to use for the follow trajectory test. It should be able to move freely from -20 to +20 degrees. End of explanation """
duration = 30
""" Explanation: Benchmark Our benchmark duration in seconds: End of explanation """
d = monitor(controllers, duration)
freq_plot(d)
results['normal'] = d

follow_trajectory(motor)
""" Explanation: Normal usage End of explanation """
for p in robot.primitives:
    p.stop()

robot._primitive_manager.stop()

d = monitor(controllers, duration)
freq_plot(d)
results['without primitive'] = d

follow_trajectory(motor)
""" Explanation: Without primitives End of explanation """
for s in robot.sensors:
    s.close()

d = monitor(controllers, duration)
freq_plot(d)
results['without sensor'] = d

follow_trajectory(motor)
""" Explanation: Without all sensors End of explanation """
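# A small wrap-up sketch (it only uses the entries filled into 'results' above, and assumes
# the three benchmark conditions were all run): print the average update frequency per
# controller for each condition, so runs on different platforms can be compared at a glance.
for condition in ('normal', 'without primitive', 'without sensor'):
    for c, t in results.get(condition, {}).items():
        freq = 1.0 / diff(t)
        print('{}: controller {} -> mean {:.1f}Hz, STD {:.1f}Hz'.format(
            condition, c.ids, freq.mean(), freq.std()))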
ChadFulton/statsmodels
examples/notebooks/glm_weights.ipynb
bsd-3-clause
import numpy as np import pandas as pd import statsmodels.formula.api as smf import statsmodels.api as sm """ Explanation: Weighted Generalized Linear Models End of explanation """ print(sm.datasets.fair.NOTE) """ Explanation: Weighted GLM: Poisson response data Load data In this example, we'll use the affair dataset using a handful of exogenous variables to predict the extra-marital affair rate. Weights will be generated to show that freq_weights are equivalent to repeating records of data. On the other hand, var_weights is equivalent to aggregating data. End of explanation """ data = sm.datasets.fair.load_pandas().data """ Explanation: Load the data into a pandas dataframe. End of explanation """ data.describe() data[:3] """ Explanation: The dependent (endogenous) variable is affairs End of explanation """ data["affairs"] = np.ceil(data["affairs"]) data[:3] (data["affairs"] == 0).mean() np.bincount(data["affairs"].astype(int)) """ Explanation: In the following we will work mostly with Poisson. While using decimal affairs works, we convert them to integers to have a count distribution. End of explanation """ data2 = data.copy() data2['const'] = 1 dc = data2['affairs rate_marriage age yrs_married const'.split()].groupby('affairs rate_marriage age yrs_married'.split()).count() dc.reset_index(inplace=True) dc.rename(columns={'const': 'freq'}, inplace=True) print(dc.shape) dc.head() """ Explanation: Condensing and Aggregating observations We have 6366 observations in our original dataset. When we consider only some selected variables, then we have fewer unique observations. In the following we combine observations in two ways, first we combine observations that have values for all variables identical, and secondly we combine observations that have the same explanatory variables. Dataset with unique observations We use pandas's groupby to combine identical observations and create a new variable freq that count how many observation have the values in the corresponding row. End of explanation """ gr = data['affairs rate_marriage age yrs_married'.split()].groupby('rate_marriage age yrs_married'.split()) df_a = gr.agg(['mean', 'sum','count']) def merge_tuple(tpl): if isinstance(tpl, tuple) and len(tpl) > 1: return "_".join(map(str, tpl)) else: return tpl df_a.columns = df_a.columns.map(merge_tuple) df_a.reset_index(inplace=True) print(df_a.shape) df_a.head() """ Explanation: Dataset with unique explanatory variables (exog) For the next dataset we combine observations that have the same values of the explanatory variables. However, because the response variable can differ among combined observations, we compute the mean and the sum of the response variable for all combined observations. We use again pandas groupby to combine observations and to create the new variables. We also flatten the MultiIndex into a simple index. End of explanation """ print('number of rows: \noriginal, with unique observations, with unique exog') data.shape[0], dc.shape[0], df_a.shape[0] """ Explanation: After combining observations with have a dataframe dc with 467 unique observations, and a dataframe df_a with 130 observations with unique values of the explanatory variables. 
End of explanation """ glm = smf.glm('affairs ~ rate_marriage + age + yrs_married', data=data, family=sm.families.Poisson()) res_o = glm.fit() print(res_o.summary()) res_o.pearson_chi2 / res_o.df_resid """ Explanation: Analysis In the following, we compare the GLM-Poisson results of the original data with models of the combined observations where the multiplicity or aggregation is given by weights or exposure. original data End of explanation """ glm = smf.glm('affairs ~ rate_marriage + age + yrs_married', data=dc, family=sm.families.Poisson(), freq_weights=np.asarray(dc['freq'])) res_f = glm.fit() print(res_f.summary()) res_f.pearson_chi2 / res_f.df_resid """ Explanation: condensed data (unique observations with frequencies) Combining identical observations and using frequency weights to take into account the multiplicity of observations produces exactly the same results. Some results attribute will differ when we want to have information about the observation and not about the aggregate of all identical observations. For example, residuals do not take freq_weights into account. End of explanation """ glm = smf.glm('affairs ~ rate_marriage + age + yrs_married', data=dc, family=sm.families.Poisson(), var_weights=np.asarray(dc['freq'])) res_fv = glm.fit() print(res_fv.summary()) """ Explanation: condensed using var_weights instead of freq_weights Next, we compare var_weights to freq_weights. It is a common practice to incorporate var_weights when the endogenous variable reflects averages and not identical observations. I don't see a theoretical reason why it produces the same results (in general). This produces the same results but df_resid differs the freq_weights example because var_weights do not change the number of effective observations. End of explanation """ res_fv.pearson_chi2 / res_fv.df_resid, res_f.pearson_chi2 / res_f.df_resid """ Explanation: Dispersion computed from the results is incorrect because of wrong df_resid. It is correct if we use the original df_resid. End of explanation """ glm = smf.glm('affairs_sum ~ rate_marriage + age + yrs_married', data=df_a, family=sm.families.Poisson(), exposure=np.asarray(df_a['affairs_count'])) res_e = glm.fit() print(res_e.summary()) res_e.pearson_chi2 / res_e.df_resid """ Explanation: aggregated or averaged data (unique values of explanatory variables) For these cases we combine observations that have the same values of the explanatory variables. The corresponding response variable is either a sum or an average. using exposure If our dependent variable is the sum of the responses of all combined observations, then under the Poisson assumption the distribution remains the same but we have varying exposure given by the number of individuals that are represented by one aggregated observation. The parameter estimates and covariance of parameters are the same with the original data, but log-likelihood, deviance and Pearson chi-squared differ End of explanation """ glm = smf.glm('affairs_mean ~ rate_marriage + age + yrs_married', data=df_a, family=sm.families.Poisson(), var_weights=np.asarray(df_a['affairs_count'])) res_a = glm.fit() print(res_a.summary()) """ Explanation: using var_weights We can also use the mean of all combined values of the dependent variable. In this case the variance will be related to the inverse of the total exposure reflected by one combined observation. 
End of explanation """ results_all = [res_o, res_f, res_e, res_a] names = 'res_o res_f res_e res_a'.split() pd.concat([r.params for r in results_all], axis=1, keys=names) pd.concat([r.bse for r in results_all], axis=1, keys=names) pd.concat([r.pvalues for r in results_all], axis=1, keys=names) pd.DataFrame(np.column_stack([[r.llf, r.deviance, r.pearson_chi2] for r in results_all]), columns=names, index=['llf', 'deviance', 'pearson chi2']) """ Explanation: Comparison We saw in the summary prints above that params and cov_params with associated Wald inference agree across versions. We summarize this in the following comparing individual results attributes across versions. Parameter estimates params, standard errors of the parameters bse and pvalues of the parameters for the tests that the parameters are zeros all agree. However, the likelihood and goodness-of-fit statistics, llf, deviance and pearson_chi2 only partially agree. Specifically, the aggregated version do not agree with the results using the original data. Warning: The behavior of llf, deviance and pearson_chi2 might still change in future versions. Both the sum and average of the response variable for unique values of the explanatory variables have a proper likelihood interpretation. However, this interpretation is not reflected in these three statistics. Computationally this might be due to missing adjustments when aggregated data is used. However, theoretically we can think in these cases, especially for var_weights of the misspecified case when likelihood analysis is inappropriate and the results should be interpreted as quasi-likelihood estimates. There is an ambiguity in the definition of var_weights because they can be used for averages with correctly specified likelihood as well as for variance adjustments in the quasi-likelihood case. We are currently not trying to match the likelihood specification. However, in the next section we show that likelihood ratio type tests still produce the same result for all aggregation versions when we assume that the underlying model is correctly specified. End of explanation """ glm = smf.glm('affairs ~ rate_marriage + yrs_married', data=data, family=sm.families.Poisson()) res_o2 = glm.fit() #print(res_f2.summary()) res_o2.pearson_chi2 - res_o.pearson_chi2, res_o2.deviance - res_o.deviance, res_o2.llf - res_o.llf glm = smf.glm('affairs ~ rate_marriage + yrs_married', data=dc, family=sm.families.Poisson(), freq_weights=np.asarray(dc['freq'])) res_f2 = glm.fit() #print(res_f2.summary()) res_f2.pearson_chi2 - res_f.pearson_chi2, res_f2.deviance - res_f.deviance, res_f2.llf - res_f.llf """ Explanation: Likelihood Ratio type tests We saw above that likelihood and related statistics do not agree between the aggregated and original, individual data. We illustrate in the following that likelihood ratio test and difference in deviance aggree across versions, however Pearson chi-squared does not. As before: This is not sufficiently clear yet and could change. As a test case we drop the age variable and compute the likelihood ratio type statistics as difference between reduced or constrained and full or unconstraint model. 
original observations and frequency weights End of explanation """ glm = smf.glm('affairs_sum ~ rate_marriage + yrs_married', data=df_a, family=sm.families.Poisson(), exposure=np.asarray(df_a['affairs_count'])) res_e2 = glm.fit() res_e2.pearson_chi2 - res_e.pearson_chi2, res_e2.deviance - res_e.deviance, res_e2.llf - res_e.llf glm = smf.glm('affairs_mean ~ rate_marriage + yrs_married', data=df_a, family=sm.families.Poisson(), var_weights=np.asarray(df_a['affairs_count'])) res_a2 = glm.fit() res_a2.pearson_chi2 - res_a.pearson_chi2, res_a2.deviance - res_a.deviance, res_a2.llf - res_a.llf """ Explanation: aggregated data: exposure and var_weights Note: LR test agrees with original observations, pearson_chi2 differs and has the wrong sign. End of explanation """ res_e2.pearson_chi2, res_e.pearson_chi2, (res_e2.resid_pearson**2).sum(), (res_e.resid_pearson**2).sum() res_e._results.resid_response.mean(), res_e.model.family.variance(res_e.mu)[:5], res_e.mu[:5] (res_e._results.resid_response**2 / res_e.model.family.variance(res_e.mu)).sum() res_e2._results.resid_response.mean(), res_e2.model.family.variance(res_e2.mu)[:5], res_e2.mu[:5] (res_e2._results.resid_response**2 / res_e2.model.family.variance(res_e2.mu)).sum() (res_e2._results.resid_response**2).sum(), (res_e._results.resid_response**2).sum() """ Explanation: Investigating Pearson chi-square statistic First, we do some sanity checks that there are no basic bugs in the computation of pearson_chi2 and resid_pearson. End of explanation """ ((res_e2._results.resid_response**2 - res_e._results.resid_response**2) / res_e2.model.family.variance(res_e2.mu)).sum() ((res_a2._results.resid_response**2 - res_a._results.resid_response**2) / res_a2.model.family.variance(res_a2.mu) * res_a2.model.var_weights).sum() ((res_f2._results.resid_response**2 - res_f._results.resid_response**2) / res_f2.model.family.variance(res_f2.mu) * res_f2.model.freq_weights).sum() ((res_o2._results.resid_response**2 - res_o._results.resid_response**2) / res_o2.model.family.variance(res_o2.mu)).sum() """ Explanation: One possible reason for the incorrect sign is that we are subtracting quadratic terms that are divided by different denominators. In some related cases, the recommendation in the literature is to use a common denominator. We can compare pearson chi-squared statistic using the same variance assumption in the full and reduced model. In this case we obtain the same pearson chi2 scaled difference between reduced and full model across all versions. (Issue #3616 is intended to track this further.) 
End of explanation """ np.exp(res_e2.model.exposure)[:5], np.asarray(df_a['affairs_count'])[:5] res_e2.resid_pearson.sum() - res_e.resid_pearson.sum() res_e2.mu[:5] res_a2.pearson_chi2, res_a.pearson_chi2, res_a2.resid_pearson.sum(), res_a.resid_pearson.sum() ((res_a2._results.resid_response**2) / res_a2.model.family.variance(res_a2.mu) * res_a2.model.var_weights).sum() ((res_a._results.resid_response**2) / res_a.model.family.variance(res_a.mu) * res_a.model.var_weights).sum() ((res_a._results.resid_response**2) / res_a.model.family.variance(res_a2.mu) * res_a.model.var_weights).sum() res_e.model.endog[:5], res_e2.model.endog[:5] res_a.model.endog[:5], res_a2.model.endog[:5] res_a2.model.endog[:5] * np.exp(res_e2.model.exposure)[:5] res_a2.model.endog[:5] * res_a2.model.var_weights[:5] from scipy import stats stats.chi2.sf(27.19530754604785, 1), stats.chi2.sf(29.083798806764687, 1) res_o.pvalues print(res_e2.summary()) print(res_e.summary()) print(res_f2.summary()) print(res_f.summary()) """ Explanation: Remainder The remainder of the notebook just contains some additional checks and can be ignored. End of explanation """
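# A quick optional cross-check of the likelihood ratio discussion above: the statistic
# 2 * (llf_full - llf_reduced) should essentially coincide across the original,
# frequency-weighted, exposure and var_weights versions. lr_stats and lr_table are
# new helper names used only for this summary.
from scipy import stats

lr_stats = {
    'res_o': 2 * (res_o.llf - res_o2.llf),
    'res_f': 2 * (res_f.llf - res_f2.llf),
    'res_e': 2 * (res_e.llf - res_e2.llf),
    'res_a': 2 * (res_a.llf - res_a2.llf),
}
lr_table = pd.DataFrame({'lr_stat': lr_stats,
                         'p_value': {k: stats.chi2.sf(v, 1) for k, v in lr_stats.items()}})
lr_table
""" Explanation: As a small convenience summary of quantities already computed above, we can tabulate the likelihood ratio statistic for dropping age, together with its chi-squared p-value on one degree of freedom, for each aggregation version. If the reasoning above holds, the four rows should agree up to numerical noise.
End of explanation """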
ELind77/gensim
docs/notebooks/Tensorboard_visualizations.ipynb
lgpl-2.1
import gensim import pandas as pd import smart_open import random # read data dataframe = pd.read_csv('movie_plots.csv') dataframe """ Explanation: TensorBoard Visualizations In this tutorial, we will learn how to visualize different types of NLP based Embeddings via TensorBoard. TensorBoard is a data visualization framework for visualizing and inspecting the TensorFlow runs and graphs. We will use a built-in Tensorboard visualizer called Embedding Projector in this tutorial. It lets you interactively visualize and analyze high-dimensional data like embeddings. Read Data For this tutorial, a transformed MovieLens dataset<sup>[1]</sup> is used. You can download the final prepared csv from here. End of explanation """ def read_corpus(documents): for i, plot in enumerate(documents): yield gensim.models.doc2vec.TaggedDocument(gensim.utils.simple_preprocess(plot, max_len=30), [i]) train_corpus = list(read_corpus(dataframe.Plots)) """ Explanation: 1. Visualizing Doc2Vec In this part, we will learn about visualizing Doc2Vec Embeddings aka Paragraph Vectors via TensorBoard. The input documents for training will be the synopsis of movies, on which Doc2Vec model is trained. <img src="Tensorboard.png"> The visualizations will be a scatterplot as seen in the above image, where each datapoint is labelled by the movie title and colored by it's corresponding genre. You can also visit this Projector link which is configured with my embeddings for the above mentioned dataset. Preprocess Text Below, we define a function to read the training documents, pre-process each document using a simple gensim pre-processing tool (i.e., tokenize text into individual words, remove punctuation, set to lowercase, etc), and return a list of words. Also, to train the model, we'll need to associate a tag/number with each document of the training corpus. In our case, the tag is simply the zero-based line number. End of explanation """ train_corpus[:2] """ Explanation: Let's take a look at the training corpus. End of explanation """ model = gensim.models.doc2vec.Doc2Vec(size=50, min_count=2, iter=55) model.build_vocab(train_corpus) model.train(train_corpus, total_examples=model.corpus_count, epochs=model.iter) """ Explanation: Training the Doc2Vec Model We'll instantiate a Doc2Vec model with a vector size with 50 words and iterating over the training corpus 55 times. We set the minimum word count to 2 in order to give higher frequency words more weighting. Model accuracy can be improved by increasing the number of iterations but this generally increases the training time. Small datasets with short documents, like this one, can benefit from more training passes. End of explanation """ model.save_word2vec_format('doc_tensor.w2v', doctag_vec=True, word_vec=False) """ Explanation: Now, we'll save the document embedding vectors per doctag. End of explanation """ %run ../../gensim/scripts/word2vec2tensor.py -i doc_tensor.w2v -o movie_plot """ Explanation: Prepare the Input files for Tensorboard Tensorboard takes two Input files. One containing the embedding vectors and the other containing relevant metadata. We'll use a gensim script to directly convert the embedding file saved in word2vec format above to the tsv format required in Tensorboard. 
End of explanation """ with open('movie_plot_metadata.tsv','w') as w: w.write('Titles\tGenres\n') for i,j in zip(dataframe.Titles, dataframe.Genres): w.write("%s\t%s\n" % (i,j)) """ Explanation: The script above generates two files, movie_plot_tensor.tsv which contain the embedding vectors and movie_plot_metadata.tsv containing doctags. But, these doctags are simply the unique index values and hence are not really useful to interpret what the document was while visualizing. So, we will overwrite movie_plot_metadata.tsv to have a custom metadata file with two columns. The first column will be for the movie titles and the second for their corresponding genres. End of explanation """ import pandas as pd import re from gensim.parsing.preprocessing import remove_stopwords, strip_punctuation from gensim.models import ldamodel from gensim.corpora.dictionary import Dictionary # read data dataframe = pd.read_csv('movie_plots.csv') # remove stopwords and punctuations def preprocess(row): return strip_punctuation(remove_stopwords(row.lower())) dataframe['Plots'] = dataframe['Plots'].apply(preprocess) # Convert data to required input format by LDA texts = [] for line in dataframe.Plots: lowered = line.lower() words = re.findall(r'\w+', lowered, flags = re.UNICODE | re.LOCALE) texts.append(words) # Create a dictionary representation of the documents. dictionary = Dictionary(texts) # Filter out words that occur less than 2 documents, or more than 30% of the documents. dictionary.filter_extremes(no_below=2, no_above=0.3) # Bag-of-words representation of the documents. corpus = [dictionary.doc2bow(text) for text in texts] """ Explanation: Now you can go to http://projector.tensorflow.org/ and upload the two files by clicking on Load data in the left panel. For demo purposes I have uploaded the Doc2Vec embeddings generated from the model trained above here. You can access the Embedding projector configured with these uploaded embeddings at this link. Using Tensorboard For the visualization purpose, the multi-dimensional embeddings that we get from the Doc2Vec model above, needs to be downsized to 2 or 3 dimensions. So that we basically end up with a new 2d or 3d embedding which tries to preserve information from the original multi-dimensional embedding. As these vectors are reduced to a much smaller dimension, the exact cosine/euclidean distances between them are not preserved, but rather relative, and hence as you’ll see below the nearest similarity results may change. TensorBoard has two popular dimensionality reduction methods for visualizing the embeddings and also provides a custom method based on text searches: Principal Component Analysis: PCA aims at exploring the global structure in data, and could end up losing the local similarities between neighbours. It maximizes the total variance in the lower dimensional subspace and hence, often preserves the larger pairwise distances better than the smaller ones. See an intuition behind it in this nicely explained answer on stackexchange. T-SNE: The idea of T-SNE is to place the local neighbours close to each other, and almost completely ignoring the global structure. It is useful for exploring local neighborhoods and finding local clusters. But the global trends are not represented accurately and the separation between different groups is often not preserved (see the t-sne plots of our data below which testify the same). Custom Projections: This is a custom bethod based on the text searches you define for different directions. 
It could be useful for finding meaningful directions in the vector space, for example, female to male, currency to country etc. You can refer to this doc for instructions on how to use and navigate through different panels available in TensorBoard. Visualize using PCA The Embedding Projector computes the top 10 principal components. The menu at the left panel lets you project those components onto any combination of two or three. <img src="pca.png"> The above plot was made using the first two principal components with total variance covered being 36.5%. Visualize using T-SNE Data is visualized by animating through every iteration of the t-sne algorithm. The t-sne menu at the left lets you adjust the value of it's two hyperparameters. The first one is Perplexity, which is basically a measure of information. It may be viewed as a knob that sets the number of effective nearest neighbors<sup>[2]</sup>. The second one is learning rate that defines how quickly an algorithm learns on encountering new examples/data points. <img src="tsne.png"> The above plot was generated with perplexity 8, learning rate 10 and iteration 500. Though the results could vary on successive runs, and you may not get the exact plot as above with same hyperparameter settings. But some small clusters will start forming as above, with different orientations. 2. Visualizing LDA In this part, we will see how to visualize LDA in Tensorboard. We will be using the Document-topic distribution as the embedding vector of a document. Basically, we treat topics as the dimensions and the value in each dimension represents the topic proportion of that topic in the document. Preprocess Text We use the movie Plots as our documents in corpus and remove rare words and common words based on their document frequency. Below we remove words that appear in less than 2 documents or in more than 30% of the documents. End of explanation """ # Set training parameters. num_topics = 10 chunksize = 2000 passes = 50 iterations = 200 eval_every = None # Train model model = ldamodel.LdaModel(corpus=corpus, id2word=dictionary, chunksize=chunksize, alpha='auto', eta='auto', iterations=iterations, num_topics=num_topics, passes=passes, eval_every=eval_every) """ Explanation: Train LDA Model End of explanation """ # Get document topics all_topics = model.get_document_topics(corpus, minimum_probability=0) all_topics[0] """ Explanation: You can refer to this notebook also before training the LDA model. It contains tips and suggestions for pre-processing the text data, and how to train the LDA model to get good results. Doc-Topic distribution Now we will use get_document_topics which infers the topic distribution of a document. It basically returns a list of (topic_id, topic_probability) for each document in the input corpus. End of explanation """ # create file for tensors with open('doc_lda_tensor.tsv','w') as w: for doc_topics in all_topics: for topics in doc_topics: w.write(str(topics[1])+ "\t") w.write("\n") # create file for metadata with open('doc_lda_metadata.tsv','w') as w: w.write('Titles\tGenres\n') for j, k in zip(dataframe.Titles, dataframe.Genres): w.write("%s\t%s\n" % (j, k)) """ Explanation: The above output shows the topic distribution of first document in the corpus as a list of (topic_id, topic_probability). Now, using the topic distribution of a document as it's vector embedding, we will plot all the documents in our corpus using Tensorboard. 
Prepare the Input files for Tensorboard Tensorboard takes two input files, one containing the embedding vectors and the other containing relevant metadata. As described above we will use the topic distribution of documents as their embedding vector. Metadata file will consist of Movie titles with their genres. End of explanation """ tensors = [] for doc_topics in all_topics: doc_tensor = [] for topic in doc_topics: if round(topic[1], 3) > 0: doc_tensor.append((topic[0], float(round(topic[1], 3)))) # sort topics according to highest probabilities doc_tensor = sorted(doc_tensor, key=lambda x: x[1], reverse=True) # store vectors to add in metadata file tensors.append(doc_tensor[:5]) # overwrite metadata file i=0 with open('doc_lda_metadata.tsv','w') as w: w.write('Titles\tGenres\n') for j,k in zip(dataframe.Titles, dataframe.Genres): w.write("%s\t%s\n" % (''.join((str(j), str(tensors[i]))),k)) i+=1 """ Explanation: Now you can go to http://projector.tensorflow.org/ and upload these two files by clicking on Load data in the left panel. For demo purposes I have uploaded the LDA doc-topic embeddings generated from the model trained above here. You can also access the Embedding projector configured with these uploaded embeddings at this link. Visualize using PCA The Embedding Projector computes the top 10 principal components. The menu at the left panel lets you project those components onto any combination of two or three. <img src="doc_lda_pca.png"> From PCA, we get a simplex (tetrahedron in this case) where each data point represent a document. These data points are colored according to their Genres which were given in the Movie dataset. As we can see there are a lot of points which cluster at the corners of the simplex. This is primarily due to the sparsity of vectors we are using. The documents at the corners primarily belongs to a single topic (hence, large weight in a single dimension and other dimensions have approximately zero weight.) You can modify the metadata file as explained below to see the dimension weights along with the Movie title. Now, we will append the topics with highest probability (topic_id, topic_probability) to the document's title, in order to explore what topics do the cluster corners or edges dominantly belong to. For this, we just need to overwrite the metadata file as below: End of explanation """ model.show_topic(topicid=0, topn=15) """ Explanation: Next, we upload the previous tensor file "doc_lda_tensor.tsv" and this new metadata file to http://projector.tensorflow.org/ . <img src="topic_with_coordinate.png"> Voila! Now we can click on any point to see it's top topics with their probabilty in that document, along with the title. As we can see in the above example, "Beverly hill cops" primarily belongs to the 0th and 1st topic as they have the highest probability amongst all. Visualize using T-SNE In T-SNE, the data is visualized by animating through every iteration of the t-sne algorithm. The t-sne menu at the left lets you adjust the value of it's two hyperparameters. The first one is Perplexity, which is basically a measure of information. It may be viewed as a knob that sets the number of effective nearest neighbors[2]. The second one is learning rate that defines how quickly an algorithm learns on encountering new examples/data points. Now, as the topic distribution of a document is used as it’s embedding vector, t-sne ends up forming clusters of documents belonging to same topics. 
In order to understand and interpret about the theme of those topics, we can use show_topic() to explore the terms that the topics consisted of. <img src="doc_lda_tsne.png"> The above plot was generated with perplexity 11, learning rate 10 and iteration 1100. Though the results could vary on successive runs, and you may not get the exact plot as above even with same hyperparameter settings. But some small clusters will start forming as above, with different orientations. I named some clusters above based on the genre of it's movies and also using the show_topic() to see relevant terms of the topic which was most prevelant in a cluster. Most of the clusters had doocumets belonging dominantly to a single topic. For ex. The cluster with movies belonging primarily to topic 0 could be named Fantasy/Romance based on terms displayed below for topic 0. You can play with the visualization yourself on this link and try to conclude a label for clusters based on movies it has and their dominant topic. You can see the top 5 topics of every point by hovering over it. Now, we can notice that their are more than 10 clusters in the above image, whereas we trained our model for num_topics=10. It's because their are few clusters, which has documents belonging to more than one topic with an approximately close topic probability values. End of explanation """ import pyLDAvis.gensim viz = pyLDAvis.gensim.prepare(model, corpus, dictionary) pyLDAvis.display(viz) """ Explanation: You can even use pyLDAvis to deduce topics more efficiently. It provides a deeper inspection of the terms highly associated with each individual topic. For this, it uses a measure called relevance of a term to a topic that allows users to flexibly rank terms best suited for a meaningful topic interpretation. It's weight parameter called λ can be adjusted to display useful terms which could help in differentiating topics efficiently. End of explanation """
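# Optional follow-up: list the top terms of every topic in one place, so the cluster
# labels guessed from the t-SNE plot can be cross-checked in plain text. Showing
# eight terms per topic is an arbitrary choice made only for readability.
for topic_id in range(num_topics):
    top_terms = ", ".join(term for term, _ in model.show_topic(topicid=topic_id, topn=8))
    print("Topic {}: {}".format(topic_id, top_terms))
""" Explanation: A compact text summary of all topics complements the interactive pyLDAvis view: printing the highest-probability terms of each topic makes it easier to attach tentative labels (such as Fantasy/Romance for topic 0, as discussed above) without hovering over individual points in the projector.
End of explanation """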
jbannister/Stanford
Latent_Me.ipynb
mit
!git clone https://github.com/Puzer/stylegan %cd stylegan # Use the version this notebook was built with !git checkout c3fb250c65840c8837ded78e34485227755c2473 !mkdir raw_images aligned_images generated_images latent_representations """ Explanation: <a href="https://colab.research.google.com/github/jbannister/Stanford/blob/master/Latent_Me.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Save a Copy (File -> Save a copy in Drive) to run Finding yourself in the latent space of StyleGAN Dmitry Nikitko (puzer) wrote the original code for this notebook which extends the work released by NVidia on StyleGAN. I have modified and annotated it to make it easier to use in Colab. After running through this notebook you should have a StyleGAN generated image (or set of images) which closely match photos of you that you've uploaded. You'll also have a npy file which is the latent code (or location) of your image in the StyleGAN latent space. End of explanation """ # e.g. mv ../me.jpg raw_images/ !mv ../<YOUR UPLOAD NAME> raw_images/ """ Explanation: Add your image(s) Upload your image using the Sidebar now. To open the sidebar select "View" and then "Table of Contents". The sidebar should now be open. Click the "Files" tab. Before uploading make sure: The image(s) you're using can be opened by PIL (jpg, png, etc) The images are larger than 1024x1024. Preferably significantly larger so the aligner can crop out a high resolution section of the image containing your face. Your face in the image is well lit and facing the camera (for best results) Click ''Upload" in the sidebar and select the images you want to upload from your computer. Note: All files uploaded in this manner end up in the root of the file tree. We'll move them into the correct spot next. End of explanation """ !python align_images.py raw_images aligned_images """ Explanation: Align your images End of explanation """ !python encode_images.py aligned_images/ generated_images/ latent_representations/ --iterations 1000 """ Explanation: This should produce an image in aligned_images/ for every image in raw_images/. It's a good idea to check that this process worked by using the Files browser to download each aligned image and make sure it looks reasonable. If you encounter scrambled images it might be because your original raw images are too small. Search for your latent self The script encode_images.py will minimize the perceptual loss between generated images from StyleGAN and each of the images you've uploaded. (By default this happens one at a time) I've had good results at 1000 iterations and it's best to check the general quality before coming back and ramping up the number of iterations to produce a high-quality latent. Higher quality comes at a cost of course. 10000 iterations will take about one hour for one image. NOTE: You may get a warning about the GPU memory limit when running this script. Don't worry it will still complete. 
End of explanation """ import os import pickle import PIL.Image import numpy as np import dnnlib import dnnlib.tflib as tflib import config from encoder.generator_model import Generator import matplotlib.pyplot as plt %matplotlib inline URL_FFHQ = 'https://drive.google.com/uc?id=1MEGjdvVpUsu1jB4zrXZN7Y4kBBOzizDQ' tflib.init_tf() with dnnlib.util.open_url(URL_FFHQ, cache_dir=config.cache_dir) as f: generator_network, discriminator_network, Gs_network = pickle.load(f) generator = Generator(Gs_network, batch_size=1, randomize_noise=False) def generate_image(latent_vector): latent_vector = latent_vector.reshape((1, 18, 512)) generator.set_dlatents(latent_vector) img_array = generator.generate_images()[0] img = PIL.Image.fromarray(img_array, 'RGB') return img.resize((256, 256)) def move_and_show(latent_vector, direction, coeffs): fig,ax = plt.subplots(1, len(coeffs), figsize=(15, 10), dpi=80) for i, coeff in enumerate(coeffs): new_latent_vector = latent_vector.copy() new_latent_vector[:8] = (latent_vector + coeff*direction)[:8] ax[i].imshow(generate_image(new_latent_vector)) ax[i].set_title('Coeff: %0.1f' % coeff) [x.axis('off') for x in ax] plt.show() # Loading already learned representations me = np.load('latent_representations/<YOUR FILENAME>.npy') # Loading already learned latent directions smile_direction = np.load('ffhq_dataset/latent_directions/smile.npy') gender_direction = np.load('ffhq_dataset/latent_directions/gender.npy') age_direction = np.load('ffhq_dataset/latent_directions/age.npy') # In general it's possible to find directions of almost any face attributes: position, hair style or color ... # Additional scripts for doing so will be realised soon """ Explanation: Download Your Results After the above cell has finished writing there should be an image in generated_images/ for each image in aligned_images/. You can right-click and download each of these images to see your final latent self. Latent Representation You can also download the npy files in the latent_representations/ directory. Each of those is a serialized numpy array which contains the (18, 512) array encoding the point in latent space which corresponds to the generated image. Which you can open with latent = np.load('filename.npy') Change your Smile, Gender, or Age Once your latent representation has been generated and saved you can explore the volume around it through latent vectors. Puzer has provided vectors for Smile, Gender and Age so you can see what you look like as your latent self varies along those axes. Run the following cells. End of explanation """ move_and_show(me, smile_direction, [-0.5, 0, 0.5]) """ Explanation: Smile transformation End of explanation """ move_and_show(me, gender_direction, [-1, 0, 1]) """ Explanation: Gender transformation End of explanation """ move_and_show(me, age_direction, [-1, 0, 1]) """ Explanation: Age transformation End of explanation """
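# The latent directions can also be combined in a single edit. The 0.5 coefficients
# and the output filename below are arbitrary illustrative choices, not tuned values.
combined_latent = me.copy()
combined_latent[:8] = (me + 0.5 * smile_direction - 0.5 * age_direction)[:8]
combined_img = generate_image(combined_latent)
combined_img.save('generated_images/combined_edit.png')
combined_img
""" Explanation: Combined transformation Because the learned directions are plain vectors, they can be added together, for example a modest smile increase paired with a slight age decrease in one step. As in the single-direction examples, only the first 8 of the 18 style layers are modified; the coefficients and output path here are purely illustrative.
End of explanation """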
ZoranPandovski/al-go-rithms
machine_learning/python/gradient boosted tree regressor/GBDTRegressor.ipynb
cc0-1.0
import numpy as np from sklearn.tree import DecisionTreeRegressor """ Explanation: Simple Implementation of Gradient Boosted Decision Tree For Regression for this implementation we use squared loss divided by 2 as loss function for GBDT $$L(y^{true}, y^{pred}) = \frac{1}{2} (y^{true} - y^{pred})^2 $$ so that our loss function gradient is $$y^{true} - y^{pred}$$ we will use this to compute residual for the sake of example we use sklearn DecisionTreeRegressor as our tree End of explanation """ class GradientBoostedDecisionTreeRegressor: def __init__(self, model_used, model_param={}, learning_rate=1e-4, n_trees=10): # the tree class that will be used self.model_used = model_used # the tree class parameter self.model_param = model_param # learning rate for our GBDT self.learning_rate = learning_rate # number of trees in our model self.n_trees = n_trees self.trees = [] self.initial_prediction = None def fit(self, X, y, verbose=False): # because we're using squared loss divided by 2 our initial prediction will be our label mean self.initial_prediction = np.mean(y) last_y_pred = np.zeros(y.shape[0])+ self.initial_prediction # for every iteration we create new tree and update residual and predicted for training set for i in range(self.n_trees): # the residual is the true value - our prediction, we want this to be 0 residual = y - last_y_pred if verbose: print('iteration num {}, mean residual {}'.format(i+1, np.mean(residual))) # we train new tree on residual instead of y dt = self.model_used(**self.model_param) dt.fit(X, residual) self.trees.append(dt) last_y_pred = last_y_pred + self.learning_rate * dt.predict(X) def predict(self, X): last_y_pred = np.zeros(X.shape[0])+ self.initial_prediction # for every tree update prediction by adding it to the last predicted value of last tree for i in range(self.n_trees): last_y_pred = last_y_pred + self.learning_rate * self.trees[i].predict(X) return last_y_pred """ Explanation: Implementation End of explanation """ import numpy as np import matplotlib.pyplot as plt from sklearn import datasets from sklearn.utils import shuffle from sklearn.metrics import mean_squared_error boston = datasets.load_boston() X, y = shuffle(boston.data, boston.target, random_state=13) X = X.astype(np.float32) offset = int(X.shape[0] * 0.8) X_train, y_train = X[:offset], y[:offset] X_test, y_test = X[offset:], y[offset:] # ############################################################################# # Fit regression model regressor = GradientBoostedDecisionTreeRegressor(DecisionTreeRegressor, {'max_depth':3}, n_trees=500, learning_rate=1e-2) regressor.fit(X_train, y_train, verbose=False) mse_train = mean_squared_error(y_train, regressor.predict(X_train)) mse_test = mean_squared_error(y_test, regressor.predict(X_test)) print("Train MSE: %.4f" % mse_train) print("Test MSE: %.4f" % mse_test) """ Explanation: usage example End of explanation """
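# Optional comparison against scikit-learn's built-in gradient boosting, configured
# with roughly matching hyperparameters. Exact numbers are expected to differ because
# the implementations differ in details such as the initial prediction and split search.
from sklearn.ensemble import GradientBoostingRegressor

sk_gbr = GradientBoostingRegressor(n_estimators=500, learning_rate=1e-2, max_depth=3)
sk_gbr.fit(X_train, y_train)
print("sklearn Train MSE: %.4f" % mean_squared_error(y_train, sk_gbr.predict(X_train)))
print("sklearn Test MSE: %.4f" % mean_squared_error(y_test, sk_gbr.predict(X_test)))
""" Explanation: comparison with sklearn As a rough sanity check, the same split can be fit with sklearn's GradientBoostingRegressor using a similar number of trees, learning rate and tree depth. The errors should be in the same ballpark as the simple implementation above rather than identical.
End of explanation """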
RTHMaK/RPGOne
scipy-2017-sklearn-master/notebooks/04 Training and Testing Data.ipynb
apache-2.0
from sklearn.datasets import load_iris from sklearn.neighbors import KNeighborsClassifier iris = load_iris() X, y = iris.data, iris.target classifier = KNeighborsClassifier() """ Explanation: SciPy 2016 Scikit-learn Tutorial Training and Testing Data To evaluate how well our supervised models generalize, we can split our data into a training and a test set: <img src="figures/train_test_split_matrix.svg" width="100%"> End of explanation """ y """ Explanation: Thinking about how machine learning is normally performed, the idea of a train/test split makes sense. Real world systems train on the data they have, and as other data comes in (from customers, sensors, or other sources) the classifier that was trained must predict on fundamentally new data. We can simulate this during training using a train/test split - the test data is a simulation of "future data" which will come into the system during production. Specifically for iris, the 150 labels in iris are sorted, which means that if we split the data using a proportional split, this will result in fudamentally altered class distributions. For instance, if we'd perform a common 2/3 training data and 1/3 test data split, our training dataset will only consists of flower classes 0 and 1 (Setosa and Versicolor), and our test set will only contain samples with class label 2 (Virginica flowers). Under the assumption that all samples are independent of each other (in contrast time series data), we want to randomly shuffle the dataset before we split the dataset as illustrated above. End of explanation """ from sklearn.model_selection import train_test_split train_X, test_X, train_y, test_y = train_test_split(X, y, train_size=0.5, random_state=123) print("Labels for training and testing data") print(train_y) print(test_y) """ Explanation: Now we need to split the data into training and testing. Luckily, this is a common pattern in machine learning and scikit-learn has a pre-built function to split data into training and testing sets for you. Here, we use 50% of the data as training, and 50% testing. 80% and 20% is another common split, but there are no hard and fast rules. The most important thing is to fairly evaluate your system on data it has not seen during training! End of explanation """ print('All:', np.bincount(y) / float(len(y)) * 100.0) print('Training:', np.bincount(train_y) / float(len(train_y)) * 100.0) print('Test:', np.bincount(test_y) / float(len(test_y)) * 100.0) """ Explanation: Tip: Stratified Split Especially for relatively small datasets, it's better to stratify the split. Stratification means that we maintain the original class proportion of the dataset in the test and training sets. 
For example, after we randomly split the dataset as shown in the previous code example, we have the following class proportions in percent: End of explanation """ train_X, test_X, train_y, test_y = train_test_split(X, y, train_size=0.5, random_state=123, stratify=y) print('All:', np.bincount(y) / float(len(y)) * 100.0) print('Training:', np.bincount(train_y) / float(len(train_y)) * 100.0) print('Test:', np.bincount(test_y) / float(len(test_y)) * 100.0) """ Explanation: So, in order to stratify the split, we can pass the label array as an additional option to the train_test_split function: End of explanation """ classifier.fit(train_X, train_y) pred_y = classifier.predict(test_X) print("Fraction Correct [Accuracy]:") print(np.sum(pred_y == test_y) / float(len(test_y))) """ Explanation: By evaluating our classifier performance on data that has been seen during training, we could get false confidence in the predictive power of our model. In the worst case, it may simply memorize the training samples but completely fails classifying new, similar samples -- we really don't want to put such a system into production! Instead of using the same dataset for training and testing (this is called "resubstitution evaluation"), it is much much better to use a train/test split in order to estimate how well your trained model is doing on new data. End of explanation """ print('Samples correctly classified:') correct_idx = np.where(pred_y == test_y)[0] print(correct_idx) print('\nSamples incorrectly classified:') incorrect_idx = np.where(pred_y != test_y)[0] print(incorrect_idx) # Plot two dimensions colors = ["darkblue", "darkgreen", "gray"] for n, color in enumerate(colors): idx = np.where(test_y == n)[0] plt.scatter(test_X[idx, 1], test_X[idx, 2], color=color, label="Class %s" % str(n)) plt.scatter(test_X[incorrect_idx, 1], test_X[incorrect_idx, 2], color="darkred") plt.xlabel('sepal width [cm]') plt.ylabel('petal length [cm]') plt.legend(loc=3) plt.title("Iris Classification results") plt.show() """ Explanation: We can also visualize the correct and failed predictions End of explanation """ # %load solutions/04_wrong-predictions.py """ Explanation: We can see that the errors occur in the area where green (class 1) and gray (class 2) overlap. This gives us insight about what features to add - any feature which helps separate class 1 and class 2 should improve classifier performance. Exercise Print the true labels of 3 wrong predictions and modify the scatterplot code, which we used above, to visualize and distinguish these three samples with different markers in the 2D scatterplot. Can you explain why our classifier made these wrong predictions? End of explanation """
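# One possible approach to the exercise above; the bundled solution file may differ.
print('True labels of 3 misclassified samples:', test_y[incorrect_idx[:3]])
print('Predicted labels of the same samples: ', pred_y[incorrect_idx[:3]])

colors = ["darkblue", "darkgreen", "gray"]
for n, color in enumerate(colors):
    idx = np.where(test_y == n)[0]
    plt.scatter(test_X[idx, 1], test_X[idx, 2], color=color, label="Class %s" % str(n))
# mark the misclassified samples with a distinct marker
plt.scatter(test_X[incorrect_idx, 1], test_X[incorrect_idx, 2],
            color="darkred", marker="x", s=80, label="misclassified")
plt.xlabel('sepal width [cm]')
plt.ylabel('petal length [cm]')
plt.legend(loc=3)
plt.title("Iris Classification results")
plt.show()
""" Explanation: One way to tackle the exercise is to print the true and predicted labels of a few misclassified samples and to re-draw the scatterplot with a distinct marker for them. The mistakes again fall where Versicolor and Virginica overlap in these two features, which is why the classifier confuses them. This is only one possible sketch, not the reference solution shipped with the tutorial.
End of explanation """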
zzsza/Datascience_School
15. 선형 회귀 분석/02. 선형 회귀 분석의 기초.ipynb
mit
import numpy as np import matplotlib.pyplot as plt import pandas as pd %matplotlib inline from sklearn.datasets import make_regression bias = 100 X0, y, coef = make_regression(n_samples=100, n_features=1, bias=bias, noise=10, coef=True, random_state=1) X = np.hstack([np.ones_like(X0), X0]) np.ones_like(X0)[:5] # no.ones_like(X0) : X0 사이즈와 동일한데 내용물은 1인 행렬 생성 X[:5] """ Explanation: 선형 회귀 분석의 기초 회귀 분석(regression analysis)은 입력 자료(독립 변수) $x$와 이에 대응하는 출력 자료(종속 변수) $y$간의 관계를 정량과 하기 위한 작업이다. 회귀 분석에는 결정론적 모형(Deterministic Model)과 확률적 모형(Probabilistic Model)이 있다. 결정론적 모형은 단순히 독립 변수 $x$에 대해 대응하는 종속 변수 $y$를 계산하는 함수를 만드는 과정이다. $$ \hat{y} = f \left( x; { x_1, y_1, x_2, y_2, \cdots, x_N, y_N } \right) = f (x; D) = f(x) $$ 여기에서 $ { x_1, y_1, x_2, y_2, \cdots, x_N, y_N } $ 는 모형 계수 추정을 위한 과거 자료이다. 만약 함수가 선형 함수이면 선형 회귀 분석(linear regression analysis)이라고 한다. $$ \hat{y} = w_0 + w_1 x_1 + w_2 x_2 + \cdots + w_D x_D $$ Augmentation 일반적으로 회귀 분석에 앞서 다음과 같이 상수항을 독립 변수에 포함하는 작업이 필요할 수 있다. 이를 feature augmentation이라고 한다. $$ x_i = \begin{bmatrix} x_{i1} \ x_{i2} \ \vdots \ x_{iD} \end{bmatrix} \rightarrow x_{i,a} = \begin{bmatrix} 1 \ x_{i1} \ x_{i2} \ \vdots \ x_{iD} \end{bmatrix} $$ augmentation을 하게 되면 모든 원소가 1인 벡터를 feature matrix 에 추가된다. $$ X = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1D} \ x_{21} & x_{22} & \cdots & x_{2D} \ \vdots & \vdots & \vdots & \vdots \ x_{N1} & x_{N2} & \cdots & x_{ND} \ \end{bmatrix} \rightarrow X_a = \begin{bmatrix} 1 & x_{11} & x_{12} & \cdots & x_{1D} \ 1 & x_{21} & x_{22} & \cdots & x_{2D} \ \vdots & \vdots & \vdots & \vdots & \vdots \ 1 & x_{N1} & x_{N2} & \cdots & x_{ND} \ \end{bmatrix} $$ augmentation을 하면 가중치 벡터(weight vector)도 차원이 증가하여 전체 수식이 다음과 같이 단순화 된다. $$ w_0 + w_1 x_1 + w_2 x_2 \begin{bmatrix} 1 & x_1 & x_2 \end{bmatrix} \begin{bmatrix} w_0 \ w_1 \ w_2 \end{bmatrix} = x_a^T w $$ End of explanation """ y = y.reshape(len(y), 1) w = np.dot(np.dot(np.linalg.inv(np.dot(X.T, X)), X.T), y) print("bias:", bias) print("coef:", coef) print("w:\n", w) w = np.linalg.lstsq(X, y)[0] w xx = np.linspace(np.min(X0) - 1, np.max(X0) + 1, 1000) XX = np.vstack([np.ones(xx.shape[0]), xx.T]).T yy = np.dot(XX, w) plt.scatter(X0, y) plt.plot(xx, yy, 'r-') plt.show() """ Explanation: OLS (Ordinary Least Squares) OLS는 가장 기본적인 결정론적 회귀 방법으로 Residual Sum of Squares(RSS)를 최소화하는 가중치 벡터 값을 미분을 통해 구한다. Residual 잔차 $$ e_i = {y}_i - x_i^T w $$ Stacking (Vector Form) $$ e = {y} - Xw $$ Residual Sum of Squares (RSS) $$\begin{eqnarray} \text{RSS} &=& \sum (y_i - \hat{y}_i)^2 \ &=& \sum e_i^2 = e^Te \ &=& (y - Xw)^T(y - Xw) \ &=& y^Ty - 2y^T X w + w^TX^TXw \end{eqnarray}$$ Minimize using Gradient $$ \dfrac{\partial \text{RSS}}{\partial w} = -2 X^T y + 2 X^TX w = 0 $$ $$ X^TX w = X^T y $$ $$ w = (X^TX)^{-1} X^T y $$ 여기에서 그레디언트를 나타내는 다음 식을 Normal equation 이라고 한다. $$ X^T y - X^TX w = 0 $$ Normal equation 에서 잔차에 대한 다음 특성을 알 수 있다. 
$$ X^T (y - X w ) = X^T e = 0 $$ End of explanation """ from sklearn.datasets import load_diabetes diabetes = load_diabetes() dfX_diabetes = pd.DataFrame(diabetes.data, columns=["X%d" % (i+1) for i in range(np.shape(diabetes.data)[1])]) dfy_diabetes = pd.DataFrame(diabetes.target, columns=["target"]) df_diabetes0 = pd.concat([dfX_diabetes, dfy_diabetes], axis=1) df_diabetes0.tail() from sklearn.linear_model import LinearRegression model_diabetes = LinearRegression().fit(diabetes.data, diabetes.target) print(model_diabetes.coef_) print(model_diabetes.intercept_) predictions = model_diabetes.predict(diabetes.data) plt.scatter(diabetes.target, predictions) plt.xlabel("target") plt.ylabel("prediction") plt.show() mean_abs_error = (np.abs(((diabetes.target - predictions)/diabetes.target)*100)).mean() print("MAE: %.2f%%" % (mean_abs_error)) import sklearn as sk sk.metrics.median_absolute_error(diabetes.target, predictions) sk.metrics.mean_squared_error(diabetes.target, predictions) """ Explanation: scikit-learn 패키지를 사용한 선형 회귀 분석 sklearn 패키지를 사용하여 선형 회귀 분석을 하는 경우에는 linear_model 서브 패키지의 LinearRegression 클래스를 사용한다. http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html 입력 인수 fit_intercept : 불리언, 옵션 상수상 추가 여부 normalize : 불리언, 옵션 회귀 분석전에 정규화 여부 속성 coef_ : 추정된 가중치 벡터 intercept_ : 추정된 상수항 Diabetes Regression End of explanation """ from sklearn.datasets import load_boston boston = load_boston() dfX_boston = pd.DataFrame(boston.data, columns=boston.feature_names) dfy_boston = pd.DataFrame(boston.target, columns=["MEDV"]) df_boston0 = pd.concat([dfX_boston, dfy_boston], axis=1) df_boston0.tail() model_boston = LinearRegression().fit(boston.data, boston.target) print(model_boston.coef_) print(model_boston.intercept_) predictions = model_boston.predict(boston.data) plt.scatter(boston.target, predictions) plt.xlabel("target") plt.ylabel("prediction") plt.show() mean_abs_error = (np.abs(((boston.target - predictions)/boston.target)*100)).mean() print("MAE: %.2f%%" % (mean_abs_error)) sk.metrics.median_absolute_error(boston.target, predictions) sk.metrics.mean_squared_error(boston.target, predictions) """ Explanation: Boston Housing Price End of explanation """ df_diabetes = sm.add_constant(df_diabetes0) df_diabetes.tail() model_diabetes2 = sm.OLS(df_diabetes.ix[:, -1], df_diabetes.ix[:, :-1]) result_diabetes2 = model_diabetes2.fit() result_diabetes2 print(result_diabetes2.summary()) df_boston = sm.add_constant(df_boston0) model_boston2 = sm.OLS(df_boston.ix[:, -1], df_boston.ix[:, :-1]) result_boston2 = model_boston2.fit() print(result_boston2.summary()) """ Explanation: statsmodels 를 사용한 선형 회귀 분석 statsmodels 패키지에서는 OLS 클래스를 사용하여 선형 회귀 분석을 실시한다. http://www.statsmodels.org/dev/generated/statsmodels.regression.linear_model.OLS.html statsmodels.regression.linear_model.OLS(endog, exog=None) 입력 인수 endog : 종속 변수. 1차원 배열 exog : 독립 변수, 2차원 배열. statsmodels 의 OLS 클래스는 자동으로 상수항을 만들어주지 않기 때문에 사용자가 add_constant 명령으로 상수항을 추가해야 한다. 모형 객체가 생성되면 fit, predict 메서드를 사용하여 추정 및 예측을 실시한다. 예측 결과는 RegressionResults 클래스 객체로 출력되면 summary 메서드로 결과 보고서를 볼 수 있다. End of explanation """ dir(result_boston2) """ Explanation: RegressionResults 클래스는 분석 결과를 다양한 속성에 저장해주므로 추후 사용자가 선택하여 활용할 수 있다. End of explanation """ sm.graphics.plot_fit(result_boston2, "CRIM") plt.show() """ Explanation: statsmodel는 다양한 회귀 분석 결과 플롯도 제공한다. plot_fit(results, exog_idx) Plot fit against one regressor. abline_plot([intercept, ...]) Plots a line given an intercept and slope. 
influence_plot(results[, ...]) Plot of influence in regression. plot_leverage_resid2(results) Plots leverage statistics vs. normalized residuals squared. plot_partregress(endog, ...) Plot partial regression for a single regressor. plot_ccpr(results, exog_idx) Plot CCPR against one regressor. plot_regress_exog(results, ...) Plot regression results against one regressor. End of explanation """
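# Another of the diagnostic plots listed above, applied to the same Boston model.
# plot_regress_exog combines the fit, residuals, partial regression and CCPR views
# for a single regressor; the figure size is only a display preference.
fig = sm.graphics.plot_regress_exog(result_boston2, "CRIM")
fig.set_size_inches(10, 8)
plt.show()
""" Explanation: As an additional illustration of the plotting helpers listed above, plot_regress_exog gathers several per-regressor diagnostics (fit, residuals, partial regression, CCPR) for the CRIM variable of the Boston model into one figure.
End of explanation """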
molpopgen/fwdpy
docs/pages/popsizes.ipynb
gpl-3.0
%matplotlib inline %pylab inline from __future__ import print_function import numpy as np import array import matplotlib.pyplot as plt #population size N=1000 #nlist corresponds to a constant population size for 10N generations #note the "dtype" argument. Without it, we'd be defaulting to int64, #which is a 64-bit signed integer. nlist=np.array([N]*(10*N),dtype=np.uint32) #This is a 'view' of the array starting from the beginning: nlist[0:] """ Explanation: Example: modeling changes in population size Simple example Let's look at an example: End of explanation """ #Evolve for 10N generations, #bottleneck to 0.25N for 100 generations, #recover to N for 50 generations nlist = np.concatenate(([N]*(10*N),[int(0.25*N)]*100,[N]*50)).astype(np.int32) plt.plot(nlist[0:]) plt.ylim(0,1.5*N) """ Explanation: A simple bottleneck In order to change population size, one simply has to change the values in the "nlist". For example, here is a population bottleneck: End of explanation """ import math N2=5*N tgrowth=500 #G is the growth rate G = math.exp( (math.log(N2)-math.log(N))/float(tgrowth) ) nlist = np.array([N]*(10*N+tgrowth),dtype=np.uint32) #Now, modify the list according to expoential growth rate for i in range(tgrowth): nlist[10*N+i] = round( N*math.pow(G,i+1) ) ##Now, we see that the population does grown from ##N=1,000 to N=5,000 during the last 500 generations ## We need the + 1 below to transform ## from the generation's index to the generation itself plt.plot(range(10*N+1,10*N+501,1),nlist[10*N:]) """ Explanation: Please note the last command, which changes the concatenated array from an array of 64 bit signed integers to 32 bit unsigned integers. Exponential growth Now, let's do population growth, where we evolve for 10N generations, and then grow the population five fold in the next 500 generations. End of explanation """
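# The same construction works in reverse: here the population declines exponentially
# from N2 back to N over 500 generations, appended to the growth history above.
# N3, tdecline, D2, decline and full_nlist are new names used only for this sketch.
N3 = N
tdecline = 500
D2 = math.exp((math.log(N3) - math.log(N2)) / float(tdecline))
decline = np.array([round(N2 * math.pow(D2, i + 1)) for i in range(tdecline)],
                   dtype=np.uint32)
full_nlist = np.concatenate((nlist, decline)).astype(np.uint32)
plt.plot(full_nlist)
plt.ylim(0, 1.2 * N2)
""" Explanation: Exponential decline Exactly the same machinery describes a shrinking population: because the per-generation rate is computed from the log ratio of final to initial size, a decline simply yields a rate below one. Here the grown population of 5,000 is returned to 1,000 over 500 generations, appended after the growth phase.
End of explanation """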
mne-tools/mne-tools.github.io
0.19/_downloads/2369809188e1e28fb4d0ad564cdfa36d/plot_source_space_time_frequency.ipynb
bsd-3-clause
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr> # # License: BSD (3-clause) import matplotlib.pyplot as plt import mne from mne import io from mne.datasets import sample from mne.minimum_norm import read_inverse_operator, source_band_induced_power print(__doc__) """ Explanation: Compute induced power in the source space with dSPM Returns STC files ie source estimates of induced power for different bands in the source space. The inverse method is linear based on dSPM inverse operator. End of explanation """ data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif' fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif' tmin, tmax, event_id = -0.2, 0.5, 1 # Setup for reading the raw data raw = io.read_raw_fif(raw_fname) events = mne.find_events(raw, stim_channel='STI 014') inverse_operator = read_inverse_operator(fname_inv) include = [] raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more # picks MEG gradiometers picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=True, stim=False, include=include, exclude='bads') # Load condition 1 event_id = 1 events = events[:10] # take 10 events to keep the computation time low # Use linear detrend to reduce any edge artifacts epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks, baseline=(None, 0), reject=dict(grad=4000e-13, eog=150e-6), preload=True, detrend=1) # Compute a source estimate per frequency band bands = dict(alpha=[9, 11], beta=[18, 22]) stcs = source_band_induced_power(epochs, inverse_operator, bands, n_cycles=2, use_fft=False, n_jobs=1) for b, stc in stcs.items(): stc.save('induced_power_%s' % b) """ Explanation: Set parameters End of explanation """ plt.plot(stcs['alpha'].times, stcs['alpha'].data.mean(axis=0), label='Alpha') plt.plot(stcs['beta'].times, stcs['beta'].data.mean(axis=0), label='Beta') plt.xlabel('Time (ms)') plt.ylabel('Power') plt.legend() plt.title('Mean source induced power') plt.show() """ Explanation: plot mean power End of explanation """
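# The band-limited source estimates saved above can be reloaded later without
# recomputing the time-frequency decomposition; only the alpha band is checked here.
stc_alpha = mne.read_source_estimate('induced_power_alpha')
print(stc_alpha)
print('alpha power array shape (sources x times):', stc_alpha.data.shape)
""" Explanation: Reload a saved source estimate The stc files written for each band can be read back with mne.read_source_estimate, which is convenient when the induced power computation itself is expensive. This is a quick check on the alpha band only.
End of explanation """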
pybel/pybel-tools
notebooks/Directed, Polar Heat Diffusion.ipynb
mit
import random import sys import time from abc import ABC, abstractmethod from collections import defaultdict from dataclasses import dataclass from itertools import product from typing import Optional import matplotlib as mpl import matplotlib.pyplot as plt import networkx as nx import numpy as np import pandas as pd from IPython.display import Markdown from sklearn.preprocessing import normalize from tqdm import tqdm_notebook as tqdm %matplotlib inline mpl.rcParams['figure.figsize'] = [8.0, 3.0] print(time.asctime()) print(sys.version) # My favorite seed np.random.seed(127) random.seed(127) def draw(graph): edges = graph.edges() pos = nx.spring_layout(graph) colors = [] for (u,v,attrib_dict) in list(graph.edges.data()): colors.append('blue' if attrib_dict['weight'] == 1 else 'red') nx.draw(graph, pos=pos, edges=edges, edge_color=colors, node_size=60) def assign_bernoulli_polarity(graph, p:float = 0.5) -> None: """Bigger probability means more positive edges.""" for u, v, k in graph.edges(keys=True): graph.edges[u, v, k]['weight'] = 1 if random.random() < p else -1 # Insulation parameters to check alphas = ( 0.1, 0.01, 0.001, ) n_subplots_x = 3 n_subplots_y = int((1 + len(alphas)) / n_subplots_x) n_subplots_x, n_subplots_y """ Explanation: Directed, Polar Heat Diffusion End of explanation """ class BaseDiffuser(ABC): def __init__(self, graph: nx.DiGraph, alpha: float, steps: Optional[int] = None) -> None: self.alpha = alpha self.deltas = [] self.heats = [] self.steps = steps or int(30 / self.alpha) self.weights = self.calculate_weights(graph) @staticmethod @abstractmethod def calculate_weights(graph): raise NotImplementedError @abstractmethod def run(self, heat, tqdm_kwargs=None) -> None: raise NotImplementedError def _plot_diffusion_title(self): return f'Diffusion ($\\alpha={self.alpha}$)' def plot(self, heat_plt_kwargs=None, deriv_plt_kwargs=None) -> None: fig, (lax, rax) = plt.subplots(1, 2) lax.set_title(self._plot_diffusion_title()) lax.set_ylabel('Heat') lax.set_xlabel('Time') pd.DataFrame(self.heats).plot.line(ax=lax, logx=True, **(heat_plt_kwargs or {})) rax.set_title('Derivative of Sum of Absolute Heats') rax.set_ylabel('Change in Sum of Absolute Heats') rax.set_xlabel('Time') derivative = [ (x2 - x1) for x1, x2 in zip(self.deltas, self.deltas[1:]) ] pd.DataFrame(derivative).plot.line(ax=rax, logx=True, legend=False, **(deriv_plt_kwargs or {})) plt.tight_layout(rect=[0, 0, 1, 0.95]) return fig, (lax, rax) @staticmethod def optimize_alpha_multirun(graph, alphas, heat): alpha_heats = {} alpha_deltas = {} for alpha in alphas: diffuser = Diffuser(graph, alpha) diffuser.run(heat) alpha_heats[alpha] = diffuser.heats alpha_deltas[alpha] = diffuser.deltas return alpha_deltas, alpha_heats @classmethod def optimize_alpha_multiplot(cls, graph, alphas, heat, heat_plt_kwargs=None, deriv_plt_kwargs=None): ds, hs = cls.optimize_alpha_multirun(graph, alphas, heat) cls._optimize_alpha_multiplot_helper(hs, plt_kwargs=heat_plt_kwargs) cls._optimize_alpha_multiplot_deriv_helper(ds, plt_kwargs=deriv_plt_kwargs) @staticmethod def _optimize_alpha_multiplot_helper(hs, plt_kwargs=None): fig, axes = plt.subplots(n_subplots_y, n_subplots_x) for alpha, ax in zip(alphas, axes.ravel()): ax.set_title(f'$\\alpha={alpha}$') ax.set_ylabel('Heat') ax.set_xlabel('Time') pd.DataFrame(hs[alpha]).plot.line(ax=ax, logx=True, **(plt_kwargs or {})) plt.suptitle(f'Diffusion ($\\alpha={alpha}$)') plt.tight_layout(rect=[0, 0, 1, 0.95]) @staticmethod def _optimize_alpha_multiplot_deriv_helper(ds, plt_kwargs=None): fig, axes = 
plt.subplots(n_subplots_y, n_subplots_x) for alpha, ax in zip(ds, axes.ravel()): ax.set_title(f'$\\alpha={alpha}$') ax.set_ylabel('Change in Sum of Heats') ax.set_xlabel('Time') derivative = [ (x2 - x1) for x1, x2 in zip(ds[alpha], ds[alpha][1:]) ] pd.DataFrame(derivative).plot.line(ax=ax, logx=True, legend=False, **(plt_kwargs or {})) plt.suptitle('Derivative of Sum of Absolute Heats') plt.tight_layout(rect=[0, 0, 1, 0.95]) @classmethod def multiplot(cls, graphs_and_heats, alpha): for graph, init_h in graphs_and_heats: d = cls(graph, alpha=alpha) d.run(init_h) fig, axes = d.plot(heat_plt_kwargs=dict(legend=False)) fig.suptitle(graph.name) plt.show() class InsulatedDiffuser(BaseDiffuser): def run(self, heat, tqdm_kwargs=None) -> None: for _ in tqdm(range(self.steps), leave=False, desc=f'alpha: {self.alpha}'): delta = heat @ self.weights self.deltas.append(np.sum(np.abs(delta))) heat = (1 - self.alpha) * heat + self.alpha * delta self.heats.append(heat) class Diffuser(InsulatedDiffuser): @staticmethod def calculate_weights(graph): adj = nx.to_numpy_array(graph) return normalize(adj, norm='l1') """ Explanation: Definitions Definitions: Directed graph $G$ is a defined as: $G = (V, E)$ Where edges $E$ are a subset of pairs of verticies $V$: $E \subseteq V \times V$ Edges $(V_i, V_j) \in E$ are weighted according to weighting function $w$ $w: V \times V \to {-1, 0, 1}$ where edges with positive polarity have weight $w(V_i, V_j) = 1$, negative polarity have weight of $w(V_i, V_j) = -1$, and missing from the graph have $w(V_i, V_j) = 0$. More succinctly, the weights can be represented with weight matrix $W$ defined as $W_{i,j} = w(V_i, V_j)$ Nodes have initial heats represented as vector $h^0 \in \mathbb{R}^{|V|}$ Exploration of Update Strategies Strategy 1: Update with L1 Norm and Insulation Heat flows through the out-edges of $V_i$ divided evenly among its neighbors. This first means that $W$ must be row-wise normalized (the "L1-norm"). It can be redefined as: $W_{i,j} = \frac{w(V_i, V_j)}{\sum_{k=0}^{|V|} w(V_i, V_k)}$ Luckily, sklearn.preprocessing.normalize does the trick. However, only percentage, $\alpha$, of the heat on a given node is allowed to flow at any given step. The remaining percentage of the heat ($1 - \alpha$) stays. Derivations and Musings Heat flows through the out-edges of $V_i$ divided evenly among its neighbors. 
$\delta_{in}^t(i) = \sum_{j=1}^{|V|} h_j^t W_{j, i} = h^t W_{., i}$ $\delta_{out}^t(i) = \sum_{j=1}^{|V|} h_i^t W_{i, j}$ $\delta^t(i) = \delta_{in}^t(i) - \delta_{out}^t(i)$ Using step size $\alpha$, the new heat at time point $t + 1$ is $h^{t+1}_i = (1 - \alpha) h^t_i + \alpha \delta^t(i)$ Therefore $h^{t+1} = (1 - \alpha) h^t + \alpha \delta^t$ End of explanation """ example_1_graph = nx.DiGraph() example_1_graph.name = 'Example 1 - Small Decreasing Graph' example_1_graph.add_edges_from([ ('A', 'B', dict(weight=-1)), ('A', 'C', dict(weight=-1)), ('B', 'C', dict(weight=+1)), ]) plt.figure(figsize=(3, 3)) draw(example_1_graph) plt.title(f'Visualization of ${example_1_graph}$') plt.show() example_1_init_h = np.array([5.0, 2.0, 2.0]) Diffuser.optimize_alpha_multiplot(example_1_graph, alphas, example_1_init_h) """ Explanation: Example 1 Example 1 is a small system set up to run out of heat defined by the short set of relations A -| B A -| C B -&gt; C with weight matrix $W$ (indexed in alphabetical order): $W=\begin{bmatrix} 0 & -1 & -1 \ 0 & 0 & 1 \ 0 & 0 & 0 \end{bmatrix}$ End of explanation """ example_2_graph = nx.scale_free_graph(n=20, alpha=.31, beta=.64, gamma=.05) example_2_graph.name = 'Example 2 - Random Graph with Even Polarity' assign_bernoulli_polarity(example_2_graph, p=0.5) draw(example_2_graph) example_2_init_h = np.random.normal(size=example_2_graph.number_of_nodes()) Diffuser.optimize_alpha_multiplot(example_2_graph, alphas, example_2_init_h, heat_plt_kwargs=dict(legend=False)) """ Explanation: Example 2 Diffusion on synthetic data. Architecture: directed scale-free with: $n=20$ $\alpha=0.31$ $\beta=0.64$ $\gamma=0.05$ Polarity: bernoulli with: $\rho=0.5$ Initial Heat: normal distribution with: $\mu=0$ $\sigma=1$ End of explanation """ example_3_graph = nx.scale_free_graph(n=20, alpha=.31, beta=.64, gamma=.05) example_3_graph.name = 'Example 3 - Random Graph with Mostly Negative Polarity' assign_bernoulli_polarity(example_3_graph, p=0.3) example_3_init_h = np.random.normal(size=example_3_graph.number_of_nodes()) diffuser = Diffuser(example_3_graph, alpha=0.01) diffuser.run(example_3_init_h) diffuser.plot(heat_plt_kwargs=dict(legend=False)) plt.show() """ Explanation: Example 3 A random graph with more positive edges. 
End of explanation """ example_4_graph = nx.scale_free_graph(n=20, alpha=.31, beta=.64, gamma=.05) example_4_graph.name = 'Example 4 - Random Graph with Mostly Positive Polarity' assign_bernoulli_polarity(example_4_graph, p=0.7) example_4_init_h = np.random.normal(size=example_4_graph.number_of_nodes()) diffuser = Diffuser(example_4_graph, alpha=0.01) diffuser.run(example_4_init_h) diffuser.plot(heat_plt_kwargs=dict(legend=False)) plt.show() """ Explanation: Example 4 A random graph with more positive edges End of explanation """ example_5_graph = nx.DiGraph() example_5_graph.name = 'Example 5 - Small Increasing Graph' example_5_graph.add_edges_from([ (0, 1, dict(weight=+1)), (0, 2, dict(weight=+1)), (1, 2, dict(weight=+1)), ]) plt.figure(figsize=(3, 3)) draw(example_5_graph) plt.title(f'Visualization of ${example_5_graph}$') plt.show() example_5_init_h = np.random.normal(size=example_5_graph.number_of_nodes()) diffuser = Diffuser(example_5_graph, alpha=0.01) diffuser.run(example_5_init_h) diffuser.plot() plt.show() """ Explanation: Example 5 End of explanation """ example_6_graph = nx.DiGraph() example_6_graph.name = 'Example 6 - Small Chaotic Increasing Graph' example_6_graph.add_edges_from([ (0, 1, dict(weight=+1)), (1, 2, dict(weight=+1)), (2, 0, dict(weight=+1)), ]) plt.figure(figsize=(3, 3)) draw(example_6_graph) plt.title(f'Visualization of ${example_6_graph}$') plt.show() example_6_init_h = np.random.normal(size=example_6_graph.number_of_nodes()) diffuser = Diffuser(example_6_graph, alpha=0.01) diffuser.run(example_6_init_h) diffuser.plot() plt.show() """ Explanation: Example 6 - Chaotic Increasing System End of explanation """ example_graphs = [ (example_1_graph, example_1_init_h), (example_2_graph, example_2_init_h), (example_3_graph, example_3_init_h), (example_4_graph, example_4_init_h), (example_5_graph, example_5_init_h), (example_6_graph, example_6_init_h), ] """ Explanation: This is the first example of a system coming to a non-zero steady state! One of the reasons is any system that has a sink will always hemmorage heat out of the sink. Some ideas on how to deal with this: Scale how much heat that can go into a node based on how much heat it always has (differential equations approach) Self-connect all nodes Self-connect only sink nodes (ones with no out-edges) End of explanation """ class SelfConnectedInsulatedDiffuser(InsulatedDiffuser): """""" def _plot_diffusion_title(self): return f'Self-Connected Insulated Diffusion ($\\alpha={self.alpha}$)' @staticmethod def calculate_weights(graph): adj = nx.to_numpy_array(graph) for i in range(adj.shape[0]): adj[i, i] = 1.0 return normalize(adj, norm='l1') SelfConnectedInsulatedDiffuser.multiplot(example_graphs, alpha=0.01) """ Explanation: Strategy 2: Self-connect nodes All nodes diffuse a bit of heat to themselves, independent of their insulation. This means that the weight matrix gets redefined to have 1's on the diagnal. End of explanation """ class AntiSelfConnectedInsulatedDiffuser(InsulatedDiffuser): """""" def _plot_diffusion_title(self): return f'Self-Connected Insulated Diffusion ($\\alpha={self.alpha}$)' @staticmethod def calculate_weights(graph): adj = nx.to_numpy_array(graph) for i in range(adj.shape[0]): adj[i, i] = -1.0 return normalize(adj, norm='l1') AntiSelfConnectedInsulatedDiffuser.multiplot(example_graphs, alpha=0.01) """ Explanation: Strategy 3: Anti-self connectivity End of explanation """
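# A compact numerical comparison of the three update strategies on Example 1.
# strategy_classes and final_heats are new helper names used only for this summary.
strategy_classes = [Diffuser,
                    SelfConnectedInsulatedDiffuser,
                    AntiSelfConnectedInsulatedDiffuser]
final_heats = {}
for cls in strategy_classes:
    d = cls(example_1_graph, alpha=0.01)
    d.run(example_1_init_h)
    final_heats[cls.__name__] = d.heats[-1]
pd.DataFrame(final_heats, index=list(example_1_graph.nodes()))
""" Explanation: Comparing the strategies side by side As a quick summary, the final heat vector of Example 1 can be tabulated under each of the three update strategies. The plain insulated diffuser is expected to drain towards zero, while the self-connected variants retain or redistribute heat differently; the exact values depend on the number of steps implied by the chosen alpha.
End of explanation """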
timothyb0912/pylogit
examples/notebooks/mlogit Benchmark--Train and Fishing.ipynb
bsd-3-clause
from collections import OrderedDict # For recording the model specification import pandas as pd # For file input/output import numpy as np # For vectorized math operations import pylogit as pl # For MNL model estimation # To convert from wide to long format """ Explanation: Mlogit Benchmark 1 The purpose of this notebook is to: <ol> <li> Demonstrate the use of the pyLogit to estimate conditional logit models.</li> <li> Benchmark the results reported pyLogit against those reported by the mlogit package.</li> </ol> The models estimated in this notebook will be as follows: <ol> <li> The "Train" model described on page 22 of the mlogit documentation. <pre> ml.Train <- mlogit(choice ~ price + time + change + comfort | -1, Tr) </pre> </li> <li> The "Fishing" model described on pages 23-24 of the mlogit documentation <pre> ml.Fish <- mlogit(mode ~ price | income | catch, Fishing, shape = "wide", varying = 2:9) </pre> </li> </ol> 1. Import Needed libraries End of explanation """ # Load the Train data, noting that the data is in wide data format wide_train_df = pd.read_csv("../data/train_data_r.csv") # Load the Fishing data, noting that the data is in wide data format wide_fishing_df = pd.read_csv("../data/fishing_data_r.csv") # Look at the raw Train data wide_train_df.head().T # Look at the raw Fishing data wide_fishing_df.head().T """ Explanation: 2. Load and look at the required datasets End of explanation """ # Note that we start the ids for the choice situations at 1. wide_train_df["choice_situation"] = wide_train_df.index.values + 1 wide_fishing_df["observation"] = wide_fishing_df.index.values + 1 """ Explanation: 3. Convert the wide format dataframes to long format 3a. Perform needed data cleaning Recognizing that the Train dataset is a panel dataset, and recognizing that our estimated MNL model will not take the panel nature of the data into account, we need a new id column that specifies each individual choice situation. In a similar fashion, the Fishing data needs an observation id column, even though it is not a panel dataset. All datasets being used in pyLogit need an "observation" id column that denotes the id of what is being thought of as the unit of observation being modeled. End of explanation """ # Convert the choice column for the Train data into integers # Note that we will use a 1 to denote 'choice1' and a 2 to # represent 'choice2' wide_train_df["choice"] = wide_train_df["choice"].map({'choice1': 1, 'choice2': 2}) # Convert the "mode" column for the Fishing data into an # integer based column. Use the following mapping: mode_name_to_id = dict(zip(["beach", "pier", "boat", "charter"], range(1, 5))) wide_fishing_df["mode"] = wide_fishing_df["mode"].map(mode_name_to_id) """ Explanation: Noting that the columns denoting the choice for both the Train and the Fishing data are string objects, we need to convert the choice columns into integer based columns. End of explanation """ # Create the needed availability columns for the Train data # where each choice is a binary decision for i in [1, 2]: wide_train_df["availability_{}".format(i)] = 1 # Create the needed availability columns for the Fishing data # where each choice has four available alternatives for i in range(1, 5): wide_fishing_df["availability_{}".format(i)] = 1 """ Explanation: For both the Train and the Fishing data, all of the alternatives are available in all choice situations. Note that, in general, this is not the case for choice data. 
As such we need to have columns that denote the availability of each alternative for each individual. These columns will all be filled with ones for each row in the wide format dataframes because all of the alternatives are always available for each individual. End of explanation """ # Look at the columns that we need to account for when converting from # the wide data format to the long data format. wide_train_df.columns ########## # Define lists of the variables pertaining to each variable type # that we need to account for in the data format transformation ########## # Determine the name for the alternative ids in the long format # data frame train_alt_id = "alt_id" # Determine the column that denotes the id of what we're treating # as individual observations, i.e. the choice situations. train_obs_id_col = "choice_situation" # Determine what column denotes the choice that was made train_choice_column = "choice" # Create the list of observation specific variables train_ind_variables = ["id", "choiceid"] # Specify the variables that vary across individuals and some or all alternatives # Note that each "main" key should be the desired name of the column in the long # data format. The inner keys shoud be the alternative ids that that have some # value for the "main" key variable. train_alt_varying_variables = {"price": {1: "price1", 2: "price2"}, "time": {1: "time1", 2: "time2"}, "change": {1: "change1", 2: "change2"}, "comfort": {1: "comfort1", 2: "comfort2"} } # Specify the availability variables train_availability_variables = OrderedDict() for alt_id, var in zip([1, 2], ["availability_1", "availability_2"]): train_availability_variables[alt_id] = var ########## # Actually perform the conversion to long format ########## long_train_df = pl.convert_wide_to_long(wide_data=wide_train_df, ind_vars=train_ind_variables, alt_specific_vars=train_alt_varying_variables, availability_vars=train_availability_variables, obs_id_col=train_obs_id_col, choice_col=train_choice_column, new_alt_id_name=train_alt_id) # Look at the long format train data long_train_df.head() """ Explanation: 3b. Convert the Train dataset to long format End of explanation """ # Look at the columns that we need to account for when converting from # the wide data format to the long data format. wide_fishing_df.columns ########## # Define lists of the variables pertaining to each variable type # that we need to account for in the data format transformation ########## # Determine the name for the alternative ids in the long format # data frame fishing_alt_id = "alt_id" # Determine the column that denotes the id of what we're treating # as individual observations, i.e. the choice situations. fishing_obs_id_col = "observation" # Determine what column denotes the choice that was made fishing_choice_column = "mode" # Create the list of observation specific variables fishing_ind_variables = ["income"] # Specify the variables that vary across individuals and some or all alternatives # Note that each "main" key should be the desired name of the column in the long # data format. The inner keys shoud be the alternative ids that that have some # value for the "main" key variable. 
fishing_alt_varying_variables = {"price": {1: "price.beach", 2: "price.pier", 3: "price.boat", 4: "price.charter"}, "catch": {1: "catch.beach", 2: "catch.pier", 3: "catch.boat", 4: "catch.charter"}, } # Specify the availability variables fishing_availability_variables = OrderedDict() for alt_id, var in zip(range(1, 5), ["availability_{}".format(x) for x in range(1, 5)]): fishing_availability_variables[alt_id] = var ########## # Actually perform the conversion to long format ########## long_fishing_df = pl.convert_wide_to_long(wide_data=wide_fishing_df, ind_vars=fishing_ind_variables, alt_specific_vars=fishing_alt_varying_variables, availability_vars=fishing_availability_variables, obs_id_col=fishing_obs_id_col, choice_col=fishing_choice_column, new_alt_id_name=fishing_alt_id) # Look at the long format train data long_fishing_df.head() """ Explanation: 3c. Convert the Fishing data to long format End of explanation """ # For the Train data, scale the price and time variables so the units # are meaningful, namely hours and euros. long_train_df["price_euros"] = long_train_df["price"] / 100.0 * 2.20371 long_train_df["time_hours"] = long_train_df["time"] / 60.0 """ Explanation: 4. Create desired variables End of explanation """ # Scale the income data long_fishing_df["income_thousandth"] = long_fishing_df["income"] / 1000.0 """ Explanation: For numeric stability reasons, it is advised that one scale one's variables so that the estimated coefficients are similar in absolute magnitude, and if possible so that the estimated coefficients are close to 1 in absolute value (in other words, not terribly tiny or extremely large). This is done for the fishing data below End of explanation """ # Look at the columns available for use in specifying the model long_train_df.columns # Create the model specification train_spec = OrderedDict() train_names = OrderedDict() # Note that for the specification dictionary, the # keys should be the column names from the long format # dataframe and the values should be a list with a combination # of alternative id's and/or lists of alternative id's. There # should be one element for each beta that will be estimated # in relation to the given column. Lists of alternative id's # mean that all of the alternatives in the list will get a # single beta for them, for the given variable. # The names dictionary should contain one name for each # element (that is each alternative id or list of alternative # ids) in the specification dictionary value for the same # variable for col, display_name in [("price_euros", "price"), ("time_hours", "time"), ("change", "change"), ("comfort", "comfort")]: train_spec[col] = [[1, 2]] train_names[col] = [display_name] # Create an instance of the MNL model class train_model = pl.create_choice_model(data=long_train_df, alt_id_col=train_alt_id, obs_id_col=train_obs_id_col, choice_col=train_choice_column, specification=train_spec, model_type="MNL", names=train_names) # Estimate the given model, starting from a point of all zeros # as the initial values. train_model.fit_mle(np.zeros(4)) # Look at the estimation summaries train_model.get_statsmodels_summary() """ Explanation: 5. Specify and estimate the desired models needed for benchmarking 5a. Train Model Note that this dataset is a stated-preference dataset with unlabeled alternatives. 
Because the unobserved elements that affect a person's choice are assumed to be the same for both alternatives, the mean of the error terms is expected to be the same for the two alternatives, therefore the alternative specific constants (ASCs) would be the same and the difference between the two ASCs would be zero. Because of this, no ASCs are estimated for the Train model. End of explanation """ # Look at the columns available for use in specifying the model long_fishing_df.columns # Create the model specification fishing_spec = OrderedDict() fishing_names = OrderedDict() # Note that for the specification dictionary, the # keys should be the column names from the long format # dataframe and the values should be a list with a combination # of alternative id's and/or lists of alternative id's. There # should be one element for each beta that will be estimated # in relation to the given column. Lists of alternative id's # mean that all of the alternatives in the list will get a # single beta for them, for the given variable. # The names dictionary should contain one name for each # element (that is each alternative id or list of alternative # ids) in the specification dictionary value for the same # variable # Note the intercept for beach is constrained to zero for identification fishing_spec["intercept"] = list(range(2, 5)) fishing_names["intercept"] = ["ASC: pier", "ASC: boat", "ASC: charter"] fishing_spec["price"] = [[1, 2, 3, 4]] fishing_names["price"] = ["price"] # Note the income coefficient for beach is constrained to zero for identification # Note also that we use the scaled variables because they numerically perform better # fishing_spec["income"] = range(2, 5) # fishing_names["income"] = ["income_{}".format(x) # for x in ["pier", "boat", "charter"]] fishing_spec["income_thousandth"] = list(range(2, 5)) fishing_names["income_thousandth"] = ["income_{} / 1000".format(x) for x in ["pier", "boat", "charter"]] fishing_spec["catch"] = list(range(1, 5)) fishing_names["catch"] = ["catch_{}".format(x) for x in ["beach", "pier", "boat", "charter"]] # Create an instance of the MNL model class fishing_model = pl.create_choice_model(data=long_fishing_df, alt_id_col=fishing_alt_id, obs_id_col=fishing_obs_id_col, choice_col=fishing_choice_column, specification=fishing_spec, model_type="MNL", names=fishing_names) # Estimate the given model, starting from a point of all zeros # as the initial values. fishing_model.fit_mle(np.zeros(11)) # Look at the estimation summaries fishing_model.get_statsmodels_summary() """ Explanation: Look at the corresponding results from mlogit: <pre> Call: mlogit(formula = choice ~ price + time + change + comfort | -1, data = Tr, method = "nr", print.level = 0) Frequencies of alternatives: 12 0.50324 0.49676 nr method 5 iterations, 0h:0m:0s g'(-H)^-1g = 0.00014 successive fonction values within tolerance limits Coefficients : Estimate Std. Error t-value Pr(>|t|) price -0.0673580 0.0033933 -19.8506 < 2.2e-16 *** time -1.7205514 0.1603517 -10.7299 < 2.2e-16 *** change -0.3263409 0.0594892 -5.4857 4.118e-08 *** comfort -0.9457256 0.0649455 -14.5618 < 2.2e-16 *** --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 Log-Likelihood: -1724.2 </pre> In terms of differences between the estimation output of mlogit and my estimation output, the differencs seem mainly only be with the p-values. My p-values are calculated with respect to an asymptotic normal distribution whereas the p-values of mlogit are based on the t-distribution. 
This accounts for the p-value difference. There is a very slight difference between mlogits value for the time parameter and my own time parameter estimate, but this may simply be due to the convergance criteria that each of the packages are using. 5b. Fishing model End of explanation """ ########## # Make sure that pyLogit's null log-likelihood # and McFadden's R^2 are correct ########## # Note that every observation in the Fishing dataset # has 4 available alternatives, therefore the null # probability is 0.25 null_prob = 0.25 # Calculate how many observations are in the Fishing # dataset num_fishing_obs = wide_fishing_df.shape[0] # Calculate the Fishing dataset's null log-likelihood null_fishing_log_likelihood = (num_fishing_obs * np.log(null_prob)) # Determine whether pyLogit's null log-likelihood is correct correct_null_ll = np.allclose(null_fishing_log_likelihood, fishing_model.null_log_likelihood, rtol=1e-7) print("pyLogit's null log-likelihood is correct:", correct_null_ll) # Calculate McFadden's R^2 mcfaddens_r2 = 1 - (fishing_model.log_likelihood / fishing_model.null_log_likelihood) print("McFadden's R^2 is {:.5f}".format(mcfaddens_r2)) """ Explanation: Look at the results from mlogit: <pre> Call: mlogit(formula = mode ~ price | income | catch, data = Fishing, shape = "wide", varying = 2:9, method = "nr", print.level = 0) Frequencies of alternatives: beach boat charter pier 0.11337 0.35364 0.38240 0.15059 nr method 7 iterations, 0h:0m:0s g'(-H)^-1g = 2.54E-05 successive function values within tolerance limits Coefficients : Estimate Std. Error t-value Pr(>|t|) boat:(intercept) 8.4184e-01 2.9996e-01 2.8065 0.0050080 ** charter:(intercept) 2.1549e+00 2.9746e-01 7.2443 4.348e-13 *** pier:(intercept) 1.0430e+00 2.9535e-01 3.5315 0.0004132 *** price -2.5281e-02 1.7551e-03 -14.4046 < 2.2e-16 *** boat:income 5.5428e-05 5.2130e-05 1.0633 0.2876612 charter:income -7.2337e-05 5.2557e-05 -1.3764 0.1687088 pier:income -1.3550e-04 5.1172e-05 -2.6480 0.0080977 ** beach:catch 3.1177e+00 7.1305e-01 4.3724 1.229e-05 *** boat:catch 2.5425e+00 5.2274e-01 4.8638 1.152e-06 *** charter:catch 7.5949e-01 1.5420e-01 4.9254 8.417e-07 *** pier:catch 2.8512e+00 7.7464e-01 3.6807 0.0002326 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Log-Likelihood: -1199.1 McFadden R^2: 0.19936 Likelihood ratio test : chisq = 597.16 (p.value = < 2.22e-16) </pre> The coefficient estimates, std. error, and t-values are exactly equal. The p-value differences are, as already noted, because mlogit uses the t-distribution whereas pyLogit uses an asymptotic normal distribution. The log-likelihoods of our final model is also the same. Note that the McFadden R^2 values are different. I am not sure how the mlogit value is calculated. From "Coefficients of Determination for Multiple Logistic Regression Analysis" by Scott Menard (2000), The American Statistician, 54:1, 17-24, we have the following equation: <center> $\textrm{McFadden's R^2} = 1 - \frac{\mathscr{L}_M}{\mathscr{L}_0}$ </center> which evaluates to 0.2681902 not 0.19936. This is provided that my null-log-likelihood is correct. The next cell shows this to be the case and verifies pyLogit's calculated McFadden's R^2. End of explanation """
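"""
Explanation: As a quick follow-up sketch (not part of the original benchmark), the point estimates reported above can be turned into choice probabilities by hand using the standard conditional-logit formula. The alternative attributes below are made-up illustrative values, and the coefficients are simply the rounded Train-model estimates from the summary above.
End of explanation
"""
import numpy as np

# Rounded point estimates for the Train model (price, time, change, comfort).
# In practice these would be read off the fitted model's summary shown above.
beta = np.array([-0.0674, -1.7206, -0.3263, -0.9457])

# Hypothetical attributes for the two unlabeled alternatives of one choice
# situation: columns are price (euros), time (hours), change, comfort.
alt_attributes = np.array([[52.9, 2.5, 0.0, 1.0],
                           [48.5, 2.0, 1.0, 1.0]])

# Systematic utilities and conditional logit (MNL) choice probabilities.
utilities = alt_attributes.dot(beta)
probabilities = np.exp(utilities) / np.exp(utilities).sum()
print(probabilities)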
amorgun/shad-ml-notebooks
notebooks/s1-1/intro.ipynb
unlicense
a = 1 + 2 a a + 1 _ ? sum ! ps -xa | grep python import time %time time.sleep(1) """ Explanation: IPython Python End of explanation """ import numpy as np np.array([[1,2,3], [7,1,2]]) data = np.array([1,2,3,4,5]) data data[1:-2] data + 1 data * 2 data * data np.sum(data * data) data.dot(data) data > 2 data[data > 2] replaces = data[:] replaces[data > 2] = -1 replaces matrix = np.random.rand(5, 4) matrix matrix[1] matrix[:, 1] matrix[(1, 2), :] import random vec_len = 1000 v1 = [random.random() for _ in range(vec_len)] v2 = [random.random() for _ in range(vec_len)] %timeit [a * b for a, b in zip(v1, v2)] np_v1, np_v2 = np.array(v1), np.array(v2) %timeit np_v1 * np_v2 np.arange(1, 5, 0.7) np.linspace(1, 5, 13) first = np.array([[1, 2], [3, 4]]) second = np.array([[10, 20], [30, 40]]) np.hstack((first, second)) np.vstack((first, second)) """ Explanation: Markdown link - ~~1~~ - 2 - a - b - 3 multiline text NumPy End of explanation """ import pandas as pd df = pd.read_csv('iris.csv') df.head(3) df[['Sepal.length', 'Petal.length']].head(5) df[df['Sepal.length'] > 7.5] df.describe() %pylab inline df['Sepal.length'].plot(kind='hist'); """ Explanation: Pandas End of explanation """ import matplotlib.pyplot as plt x = np.arange(1, 10, 0.2) y = np.sin(x) plt.plot(x, y); plt.scatter(x, y); plt.plot(x, y, marker='o'); """ Explanation: Matplotlib End of explanation """
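"""
Explanation: One more small pandas example in the same spirit, assuming the same iris.csv file and the column names shown above ('Sepal.length', 'Petal.length'); the thresholds are arbitrary.
End of explanation
"""
import pandas as pd

df = pd.read_csv('iris.csv')

# Combine boolean filters with & (each condition needs its own parentheses)
big_flowers = df[(df['Sepal.length'] > 7.0) & (df['Petal.length'] > 6.0)]

# Sort by a column and keep the top rows
big_flowers.sort_values('Sepal.length', ascending=False).head(3)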
bloomberg/bqplot
examples/Marks/Pyplot/HeatMap.ipynb
apache-2.0
import numpy as np from ipywidgets import Layout import bqplot.pyplot as plt from bqplot import ColorScale """ Explanation: Heatmap The HeatMap mark represents a 2d matrix of values as a color image. It can be used to visualize a 2d function, or a grayscale image for instance. HeatMap is very similar to the GridHeatMap, but should be preferred for a greater number of points (starting at around 100x100), to avoid overloading the browser. GridHeatMap offers more control (interactions, selections), and is better suited for a smaller number of points. End of explanation """ x = np.linspace(-5, 5, 200) y = np.linspace(-5, 5, 200) X, Y = np.meshgrid(x, y) color = np.cos(X ** 2 + Y ** 2) """ Explanation: Data Input x is a 1d array, corresponding to the abscissas of the points (size N) y is a 1d array, corresponding to the ordinates of the points (size M) color is a 2d array, $\text{color}_{ij}$ is the intensity of the point $(x_i, y_j)$ (size (N, M)) Scales must be defined for each attribute: - a LinearScale, LogScale or OrdinalScale for x and y - a ColorScale for color End of explanation """ fig = plt.figure( title="Cosine", layout=Layout(width="650px", height="650px"), min_aspect_ratio=1, max_aspect_ratio=1, padding_y=0, ) heatmap = plt.heatmap(color, x=x, y=y) fig """ Explanation: Plotting a 2-dimensional function This is a visualization of the function $f(x, y) = \text{cos}(x^2+y^2)$ End of explanation """ from scipy.misc import ascent Z = ascent() Z = Z[::-1, :] aspect_ratio = Z.shape[1] / Z.shape[0] img = plt.figure( title="Ascent", layout=Layout(width="650px", height="650px"), min_aspect_ratio=aspect_ratio, max_aspect_ratio=aspect_ratio, padding_y=0, ) plt.scales(scales={"color": ColorScale(scheme="Greys", reverse=True)}) axes_options = { "x": {"visible": False}, "y": {"visible": False}, "color": {"visible": False}, } ascent = plt.heatmap(Z, axes_options=axes_options) img """ Explanation: Displaying an image The HeatMap can be used as is to display a 2d grayscale image, by feeding the matrix of pixel intensities to the color attribute End of explanation """
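"""
Explanation: The same HeatMap recipe works for any 2d array of values. As a minimal variation of the cosine example above (using only calls already demonstrated in this notebook), here is a Gaussian bump on the same grid.
End of explanation
"""
import numpy as np
from ipywidgets import Layout
import bqplot.pyplot as plt

x = np.linspace(-5, 5, 200)
y = np.linspace(-5, 5, 200)
X, Y = np.meshgrid(x, y)
color = np.exp(-(X ** 2 + Y ** 2) / 4.0)

fig = plt.figure(
    title="Gaussian bump",
    layout=Layout(width="650px", height="650px"),
    min_aspect_ratio=1,
    max_aspect_ratio=1,
    padding_y=0,
)
heatmap = plt.heatmap(color, x=x, y=y)
fig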
GoogleCloudPlatform/asl-ml-immersion
notebooks/text_models/solutions/text_generation.ipynb
apache-2.0
import os import time import numpy as np import tensorflow as tf """ Explanation: Text generation with an RNN Learning Objectives Learn how to generate text using a RNN Create training examples and targets for text generation Build a RNN model for sequence generation using Keras Subclassing Create a text generator and evaluate the output This tutorial demonstrates how to generate text using a character-based RNN. You will work with a dataset of Shakespeare's writing from Andrej Karpathy's The Unreasonable Effectiveness of Recurrent Neural Networks. Given a sequence of characters from this data ("Shakespear"), train a model to predict the next character in the sequence ("e"). Longer sequences of text can be generated by calling the model repeatedly. Below is the sample output when the model in this tutorial trained for 30 epochs, and started with the prompt "Q": <pre> QUEENE: I had thought thou hadst a Roman; for the oracle, Thus by All bids the man against the word, Which are so weak of care, by old care done; Your children were in your holy love, And the precipitation through the bleeding throne. BISHOP OF ELY: Marry, and will, my lord, to weep in such a one were prettiest; Yet now I was adopted heir Of the world's lamentable day, To watch the next way with his father with his face? ESCALUS: The cause why then we are all resolved more sons. VOLUMNIA: O, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, it is no sin it should be dead, And love and pale as any will to that word. QUEEN ELIZABETH: But how long have I heard the soul for this world, And show his hands of life be proved to stand. PETRUCHIO: I say he look'd on, if I must be content To stay him from the fatal of our country's bliss. His lordship pluck'd from this sentence then for prey, And then let us twain, being the moon, were she such a case as fills m </pre> While some of the sentences are grammatical, most do not make sense. The model has not learned the meaning of words, but here are some things to consider: The model is character-based. When training started, the model did not know how to spell an English word, or that words were even a unit of text. The structure of the output resembles a play—blocks of text generally begin with a speaker name, in all capital letters similar to the dataset. As demonstrated below, the model is trained on small batches of text (100 characters each), and is still able to generate a longer sequence of text with coherent structure. Setup Import TensorFlow and other libraries End of explanation """ path_to_file = tf.keras.utils.get_file( "shakespeare.txt", "https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt", ) """ Explanation: Download the Shakespeare dataset Change the following line to run this code on your own data. End of explanation """ text = open(path_to_file, "rb").read().decode(encoding="utf-8") print(f"Length of text: {len(text)} characters") """ Explanation: Read the data First, we'll download the file and then decode. End of explanation """ print(text[:250]) """ Explanation: Let's take a look at the first 250 characters in text End of explanation """ vocab = sorted(set(text)) print(f"{len(vocab)} unique characters") """ Explanation: Let's check to see how many unique characters are in our corpus/document. 
End of explanation """ example_texts = ["abcdefg", "xyz"] # TODO 1 chars = tf.strings.unicode_split(example_texts, input_encoding="UTF-8") chars """ Explanation: Process the text Vectorize the text Before training, you need to convert the strings to a numerical representation. Using tf.keras.layers.StringLookup layer can convert each character into a numeric ID. It just needs the text to be split into tokens first. End of explanation """ ids_from_chars = tf.keras.layers.StringLookup( vocabulary=list(vocab), mask_token=None ) """ Explanation: Now create the tf.keras.layers.StringLookup layer: End of explanation """ ids = ids_from_chars(chars) ids """ Explanation: It converts from tokens to character IDs: End of explanation """ chars_from_ids = tf.keras.layers.StringLookup( vocabulary=ids_from_chars.get_vocabulary(), invert=True, mask_token=None ) """ Explanation: Since the goal of this tutorial is to generate text, it will also be important to invert this representation and recover human-readable strings from it. For this you can use tf.keras.layers.StringLookup(..., invert=True). Note: Here instead of passing the original vocabulary generated with sorted(set(text)) use the get_vocabulary() method of the tf.keras.layers.StringLookup layer so that the [UNK] tokens is set the same way. End of explanation """ chars = chars_from_ids(ids) chars """ Explanation: This layer recovers the characters from the vectors of IDs, and returns them as a tf.RaggedTensor of characters: End of explanation """ tf.strings.reduce_join(chars, axis=-1).numpy() def text_from_ids(ids): return tf.strings.reduce_join(chars_from_ids(ids), axis=-1) """ Explanation: You can tf.strings.reduce_join to join the characters back into strings. End of explanation """ # TODO 2 all_ids = ids_from_chars(tf.strings.unicode_split(text, "UTF-8")) all_ids ids_dataset = tf.data.Dataset.from_tensor_slices(all_ids) for ids in ids_dataset.take(10): print(chars_from_ids(ids).numpy().decode("utf-8")) seq_length = 100 examples_per_epoch = len(text) // (seq_length + 1) """ Explanation: The prediction task Given a character, or a sequence of characters, what is the most probable next character? This is the task you're training the model to perform. The input to the model will be a sequence of characters, and you train the model to predict the output—the following character at each time step. Since RNNs maintain an internal state that depends on the previously seen elements, given all the characters computed until this moment, what is the next character? Create training examples and targets Next divide the text into example sequences. Each input sequence will contain seq_length characters from the text. For each input sequence, the corresponding targets contain the same length of text, except shifted one character to the right. So break the text into chunks of seq_length+1. For example, say seq_length is 4 and our text is "Hello". The input sequence would be "Hell", and the target sequence "ello". First use the tf.data.Dataset.from_tensor_slices function to convert the text vector into a stream of character indices. End of explanation """ sequences = ids_dataset.batch(seq_length + 1, drop_remainder=True) for seq in sequences.take(1): print(chars_from_ids(seq)) """ Explanation: The batch method lets you easily convert these individual characters to sequences of the desired size. 
End of explanation """ for seq in sequences.take(5): print(text_from_ids(seq).numpy()) """ Explanation: It's easier to see what this is doing if you join the tokens back into strings: End of explanation """ def split_input_target(sequence): input_text = sequence[:-1] target_text = sequence[1:] return input_text, target_text split_input_target(list("Tensorflow")) dataset = sequences.map(split_input_target) for input_example, target_example in dataset.take(1): print("Input :", text_from_ids(input_example).numpy()) print("Target:", text_from_ids(target_example).numpy()) """ Explanation: For training you'll need a dataset of (input, label) pairs. Where input and label are sequences. At each time step the input is the current character and the label is the next character. Here's a function that takes a sequence as input, duplicates, and shifts it to align the input and label for each timestep: End of explanation """ # Batch size BATCH_SIZE = 64 # Buffer size to shuffle the dataset # (TF data is designed to work with possibly infinite sequences, # so it doesn't attempt to shuffle the entire sequence in memory. Instead, # it maintains a buffer in which it shuffles elements). BUFFER_SIZE = 10000 dataset = ( dataset.shuffle(BUFFER_SIZE) .batch(BATCH_SIZE, drop_remainder=True) .prefetch(tf.data.experimental.AUTOTUNE) ) dataset """ Explanation: Create training batches You used tf.data to split the text into manageable sequences. But before feeding this data into the model, you need to shuffle the data and pack it into batches. End of explanation """ # Length of the vocabulary in chars vocab_size = len(vocab) # The embedding dimension embedding_dim = 256 # Number of RNN units rnn_units = 1024 """ Explanation: Build The Model This section defines the model as a keras.Model subclass (For details see Making new Layers and Models via subclassing). TODO 3 Build a model with the following layers tf.keras.layers.Embedding: The input layer. A trainable lookup table that will map each character-ID to a vector with embedding_dim dimensions; tf.keras.layers.GRU: A type of RNN with size units=rnn_units (You can also use an LSTM layer here.) tf.keras.layers.Dense: The output layer, with vocab_size outputs. It outputs one logit for each character in the vocabulary. These are the log-likelihood of each character according to the model. End of explanation """ class MyModel(tf.keras.Model): def __init__(self, vocab_size, embedding_dim, rnn_units): super().__init__(self) # TODO - Create an embedding layer self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim) # TODO - Create a GRU layer self.gru = tf.keras.layers.GRU( rnn_units, return_sequences=True, return_state=True ) # TODO - Finally connect it with a dense layer self.dense = tf.keras.layers.Dense(vocab_size) def call(self, inputs, states=None, return_state=False, training=False): x = self.embedding(inputs, training=training) # since we are training a text generation model, # we use the previous state, in training. If there is no state, # then we initialize the state if states is None: states = self.gru.get_initial_state(x) x, states = self.gru(x, initial_state=states, training=training) x = self.dense(x, training=training) if return_state: return x, states else: return x model = MyModel( # Be sure the vocabulary size matches the `StringLookup` layers. 
vocab_size=len(ids_from_chars.get_vocabulary()), embedding_dim=embedding_dim, rnn_units=rnn_units, ) """ Explanation: The class below does the following: - We derive a class from tf.keras.Model - The constructor is used to define the layers of the model - We define the pass forward using the layers defined in the constructor End of explanation """ for input_example_batch, target_example_batch in dataset.take(1): example_batch_predictions = model(input_example_batch) print( example_batch_predictions.shape, "# (batch_size, sequence_length, vocab_size)", ) """ Explanation: For each character the model looks up the embedding, runs the GRU one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character. Try the model Now run the model to see that it behaves as expected. First check the shape of the output: End of explanation """ model.summary() """ Explanation: In the above example the sequence length of the input is 100 but the model can be run on inputs of any length: End of explanation """ sampled_indices = tf.random.categorical( example_batch_predictions[0], num_samples=1 ) sampled_indices = tf.squeeze(sampled_indices, axis=-1).numpy() """ Explanation: To get actual predictions from the model you need to sample from the output distribution, to get actual character indices. This distribution is defined by the logits over the character vocabulary. Note: It is important to sample from this distribution as taking the argmax of the distribution can easily get the model stuck in a loop. Try it for the first example in the batch: End of explanation """ sampled_indices """ Explanation: This gives us, at each timestep, a prediction of the next character index: End of explanation """ print("Input:\n", text_from_ids(input_example_batch[0]).numpy()) print() print("Next Char Predictions:\n", text_from_ids(sampled_indices).numpy()) """ Explanation: Decode these to see the text predicted by this untrained model: End of explanation """ # TODO - add a loss function here loss = tf.losses.SparseCategoricalCrossentropy(from_logits=True) example_batch_mean_loss = loss(target_example_batch, example_batch_predictions) print( "Prediction shape: ", example_batch_predictions.shape, " # (batch_size, sequence_length, vocab_size)", ) print("Mean loss: ", example_batch_mean_loss) """ Explanation: Train the model At this point the problem can be treated as a standard classification problem. Given the previous RNN state, and the input this time step, predict the class of the next character. Attach an optimizer, and a loss function The standard tf.keras.losses.sparse_categorical_crossentropy loss function works in this case because it is applied across the last dimension of the predictions. Because your model returns logits, you need to set the from_logits flag. End of explanation """ tf.exp(example_batch_mean_loss).numpy() """ Explanation: A newly initialized model shouldn't be too sure of itself, the output logits should all have similar magnitudes. To confirm this you can check that the exponential of the mean loss is approximately equal to the vocabulary size. A much higher loss means the model is sure of its wrong answers, and is badly initialized: End of explanation """ model.compile(optimizer="adam", loss=loss) """ Explanation: Configure the training procedure using the tf.keras.Model.compile method. Use tf.keras.optimizers.Adam with default arguments and the loss function. 
End of explanation """ # Directory where the checkpoints will be saved checkpoint_dir = "./training_checkpoints" # Name of the checkpoint files checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}") checkpoint_callback = tf.keras.callbacks.ModelCheckpoint( filepath=checkpoint_prefix, save_weights_only=True ) """ Explanation: Configure checkpoints Use a tf.keras.callbacks.ModelCheckpoint to ensure that checkpoints are saved during training: End of explanation """ EPOCHS = 10 history = model.fit(dataset, epochs=EPOCHS, callbacks=[checkpoint_callback]) """ Explanation: Execute the training To keep training time reasonable, use 10 epochs to train the model. In Colab, set the runtime to GPU for faster training. End of explanation """ class OneStep(tf.keras.Model): def __init__(self, model, chars_from_ids, ids_from_chars, temperature=1.0): super().__init__() self.temperature = temperature self.model = model self.chars_from_ids = chars_from_ids self.ids_from_chars = ids_from_chars # Create a mask to prevent "[UNK]" from being generated. skip_ids = self.ids_from_chars(["[UNK]"])[:, None] sparse_mask = tf.SparseTensor( # Put a -inf at each bad index. values=[-float("inf")] * len(skip_ids), indices=skip_ids, # Match the shape to the vocabulary dense_shape=[len(ids_from_chars.get_vocabulary())], ) self.prediction_mask = tf.sparse.to_dense(sparse_mask) @tf.function def generate_one_step(self, inputs, states=None): # Convert strings to token IDs. input_chars = tf.strings.unicode_split(inputs, "UTF-8") input_ids = self.ids_from_chars(input_chars).to_tensor() # Run the model. # predicted_logits.shape is [batch, char, next_char_logits] predicted_logits, states = self.model( inputs=input_ids, states=states, return_state=True ) # Only use the last prediction. predicted_logits = predicted_logits[:, -1, :] predicted_logits = predicted_logits / self.temperature # Apply the prediction mask: prevent "[UNK]" from being generated. predicted_logits = predicted_logits + self.prediction_mask # Sample the output logits to generate token IDs. predicted_ids = tf.random.categorical(predicted_logits, num_samples=1) predicted_ids = tf.squeeze(predicted_ids, axis=-1) # Convert from token ids to characters predicted_chars = self.chars_from_ids(predicted_ids) # Return the characters and model state. return predicted_chars, states one_step_model = OneStep(model, chars_from_ids, ids_from_chars) """ Explanation: Generate text The simplest way to generate text with this model is to run it in a loop, and keep track of the model's internal state as you execute it. Each time you call the model you pass in some text and an internal state. The model returns a prediction for the next character and its new state. Pass the prediction and state back in to continue generating text. The following makes a single step prediction: End of explanation """ start = time.time() states = None next_char = tf.constant(["ROMEO:"]) result = [next_char] for n in range(1000): next_char, states = one_step_model.generate_one_step( next_char, states=states ) result.append(next_char) result = tf.strings.join(result) end = time.time() print(result[0].numpy().decode("utf-8"), "\n\n" + "_" * 80) print("\nRun time:", end - start) """ Explanation: Run it in a loop to generate some text. Looking at the generated text, you'll see the model knows when to capitalize, make paragraphs and imitates a Shakespeare-like writing vocabulary. With the small number of training epochs, it has not yet learned to form coherent sentences. 
End of explanation """ start = time.time() states = None next_char = tf.constant(["ROMEO:", "ROMEO:", "ROMEO:", "ROMEO:", "ROMEO:"]) result = [next_char] for n in range(1000): next_char, states = one_step_model.generate_one_step( next_char, states=states ) result.append(next_char) result = tf.strings.join(result) end = time.time() print(result, "\n\n" + "_" * 80) print("\nRun time:", end - start) """ Explanation: The easiest thing you can do to improve the results is to train it for longer (try EPOCHS = 30). You can also experiment with a different start string, try adding another RNN layer to improve the model's accuracy, or adjust the temperature parameter to generate more or less random predictions. If you want the model to generate text faster the easiest thing you can do is batch the text generation. In the example below the model generates 5 outputs in about the same time it took to generate 1 above. End of explanation """ tf.saved_model.save(one_step_model, "one_step") one_step_reloaded = tf.saved_model.load("one_step") states = None next_char = tf.constant(["ROMEO:"]) result = [next_char] for n in range(100): next_char, states = one_step_reloaded.generate_one_step( next_char, states=states ) result.append(next_char) print(tf.strings.join(result)[0].numpy().decode("utf-8")) """ Explanation: Export the generator This single-step model can easily be saved and restored, allowing you to use it anywhere a tf.saved_model is accepted. End of explanation """ class CustomTraining(MyModel): @tf.function def train_step(self, inputs): inputs, labels = inputs with tf.GradientTape() as tape: predictions = self(inputs, training=True) loss = self.loss(labels, predictions) grads = tape.gradient(loss, model.trainable_variables) self.optimizer.apply_gradients(zip(grads, model.trainable_variables)) return {"loss": loss} """ Explanation: Advanced: Customized Training The above training procedure is simple, but does not give you much control. It uses teacher-forcing which prevents bad predictions from being fed back to the model, so the model never learns to recover from mistakes. So now that you've seen how to run the model manually next you'll implement the training loop. This gives a starting point if, for example, you want to implement curriculum learning to help stabilize the model's open-loop output. The most important part of a custom training loop is the train step function. Use tf.GradientTape to track the gradients. You can learn more about this approach by reading the eager execution guide. The basic procedure is: Execute the model and calculate the loss under a tf.GradientTape. Calculate the updates and apply them to the model using the optimizer. End of explanation """ model = CustomTraining( vocab_size=len(ids_from_chars.get_vocabulary()), embedding_dim=embedding_dim, rnn_units=rnn_units, ) model.compile( optimizer=tf.keras.optimizers.Adam(), loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), ) model.fit(dataset, epochs=1) """ Explanation: The above implementation of the train_step method follows Keras' train_step conventions. This is optional, but it allows you to change the behavior of the train step and still use keras' Model.compile and Model.fit methods. 
End of explanation """ EPOCHS = 10 mean = tf.metrics.Mean() for epoch in range(EPOCHS): start = time.time() mean.reset_states() for (batch_n, (inp, target)) in enumerate(dataset): logs = model.train_step([inp, target]) mean.update_state(logs["loss"]) if batch_n % 50 == 0: template = ( f"Epoch {epoch+1} Batch {batch_n} Loss {logs['loss']:.4f}" ) print(template) # saving (checkpoint) the model every 5 epochs if (epoch + 1) % 5 == 0: model.save_weights(checkpoint_prefix.format(epoch=epoch)) print() print(f"Epoch {epoch+1} Loss: {mean.result().numpy():.4f}") print(f"Time taken for 1 epoch {time.time() - start:.2f} sec") print("_" * 80) model.save_weights(checkpoint_prefix.format(epoch=epoch)) """ Explanation: Or if you need more control, you can write your own complete custom training loop: End of explanation """
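"""
Explanation: A small optional sketch of the temperature parameter mentioned above: lower temperatures sharpen the sampling distribution (less random text), while higher temperatures flatten it. This reuses the OneStep wrapper and the trained model defined earlier; the value 0.5 is just an illustration.
End of explanation
"""
# Generate with a lower sampling temperature and compare against the output above.
low_temp_model = OneStep(model, chars_from_ids, ids_from_chars, temperature=0.5)

states = None
next_char = tf.constant(["ROMEO:"])
result = [next_char]

for n in range(300):
    next_char, states = low_temp_model.generate_one_step(next_char, states=states)
    result.append(next_char)

print(tf.strings.join(result)[0].numpy().decode("utf-8"))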
quantumlib/OpenFermion
docs/tutorials/bosonic_operators.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2020 The OpenFermion Developers End of explanation """ try: import openfermion except ImportError: !pip install git+https://github.com/quantumlib/OpenFermion.git@master#egg=openfermion """ Explanation: Introduction to the bosonic operators <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://quantumai.google/openfermion/tutorials/bosonic_operators"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/quantumlib/OpenFermion/blob/master/docs/tutorials/bosonic_operators.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/quantumlib/OpenFermion/blob/master/docs/tutorials/bosonic_operators.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/OpenFermion/docs/tutorials/bosonic_operators.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a> </td> </table> Setup Install the OpenFermion package: End of explanation """ from openfermion.ops import BosonOperator my_term = BosonOperator(((3, 1), (5, 0), (4, 1), (1, 0))) print(my_term) my_term = BosonOperator('3^ 5 4^ 1') print(my_term) """ Explanation: The BosonOperator Bosonic systems, like Fermionic systems, are expressed using the bosonic creation and annihilation operators $b^\dagger_k$ and $b_k$ respectively. Unlike fermions, however, which satisfy the Pauli exclusion principle and thus are distinguished by the canonical fermionic anticommutation relations, the bosonic ladder operators instead satisfy a set of commutation relations: $$ \begin{align} & [b_i^\dagger, b_j^\dagger] = 0, ~~~ [b_i, b_j] = 0, ~~~ [b_i, b^\dagger_j] = \delta_{ij} \end{align} $$ Any weighted sums of products of these operators are represented with the BosonOperator data structure in OpenFermion. Similarly to when we introduced the FermionOperator, the following are examples of valid BosonOperators: $$ \begin{align} & a_1 \nonumber \ & 1.7 b^\dagger_3 \nonumber \ &-1.7 \, b^\dagger_3 b_1 \nonumber \ &(1 + 2i) \, b^\dagger_3 b^\dagger_4 b_1 b_9 \nonumber \ &(1 + 2i) \, b^\dagger_3 b^\dagger_4 b_1 b_9 - 1.7 \, b^\dagger_3 b_1 \nonumber \end{align} $$ The BosonOperator class is contained in ops/_boson_operators.py. The BosonOperator is derived from the SymbolicOperator, the same class that derives the FermionOperator. As such, the details of the class implementation are identical - as in the fermion case, the class is implemented as hash table (python dictionary). 
The keys of the dictionary encode the strings of ladder operators and values of the dictionary store the coefficients - the strings are subsequently encoded as a tuple of 2-tuples which we refer to as the "terms tuple". Each ladder operator is represented by a 2-tuple. The first element of the 2-tuple is an int indicating the quantum mode on which the ladder operator acts. The second element of the 2-tuple is Boole: 1 represents raising and 0 represents lowering. For instance, $b^\dagger_8$ is represented in a 2-tuple as $(8, 1)$. Note that indices start at 0 and the identity operator is an empty list. $$ \begin{align} I & \mapsto () \nonumber \ b_1 & \mapsto ((1, 0),) \nonumber \ b^\dagger_3 & \mapsto ((3, 1),) \nonumber \ b^\dagger_3 b_1 & \mapsto ((3, 1), (1, 0)) \nonumber \ b^\dagger_3 b^\dagger_4 b_1 b_9 & \mapsto ((3, 1), (4, 1), (1, 0), (9, 0)) \nonumber \end{align} $$ Alternatively, the BosonOperator supports the string-based syntax introduced in the FermionOperator; in this case, the terms are separated by spaces, with the integer corresponding to the quantum mode the operator acts on, and '^' indicating the Hermitian conjugate: $$ \begin{align} I & \mapsto \textrm{""} \nonumber \ b_1 & \mapsto \textrm{"1"} \nonumber \ b^\dagger_3 & \mapsto \textrm{"3^"} \nonumber \ b^\dagger_3 b_1 & \mapsto \textrm{"3^}\;\textrm{1"} \nonumber \ b^\dagger_3 b^\dagger_4 b_1 b_9 & \mapsto \textrm{"3^}\;\textrm{4^}\;\textrm{1}\;\textrm{9"} \nonumber \end{align} $$ <div class="alert alert-info"> Note that, unlike the `FermionOperator`, the bosonic creation operators of different indices commute. As a result, the `BosonOperator` automatically sorts groups of annihilation and creation operators in ascending order of the modes they act on. </div> Let's initialize our first term! We do it two different ways below. End of explanation """ good_way_to_initialize = BosonOperator('3^ 1', -1.7) print(good_way_to_initialize) bad_way_to_initialize = -1.7 * BosonOperator('3^ 1') print(bad_way_to_initialize) identity = BosonOperator('') print(identity == BosonOperator.identity()) print(identity) zero_operator = BosonOperator() print(zero_operator == BosonOperator.zero()) print(zero_operator) """ Explanation: Note the printed order differs from the code, since bosonic operators of different indices commute past each other. The preferred way to specify the coefficient in openfermion is to provide an optional coefficient argument. If not provided, the coefficient defaults to 1. In the code below, the first method is preferred. The multiplication in the second method actually creates a copy of the term, which introduces some additional cost. All inplace operands (such as +=) modify classes whereas binary operands such as + create copies. The additive and multiplicative identities can also be created: BosonOperator(()) and BosonOperator('') initialises the identity (BosonOperator.identity()). BosonOperator() and BosonOperator() initialises the zero operator (BosonOperator.zero()). End of explanation """ my_operator = BosonOperator('4^ 1^ 3 9', 1. + 2.j) print(my_operator) print(my_operator.terms) """ Explanation: Note that BosonOperator has only one attribute: .terms. This attribute is the dictionary which stores the term tuples. 
End of explanation """ from openfermion.utils import hermitian_conjugated, is_hermitian from openfermion.transforms import normal_ordered """ Explanation: Methods and functions that act on the BosonOperator There are various functions and methods that act on the BosonOperator; these include the ability to normal order, double check if the operator is Hermitian, and calculate the Hermitian conjugate. End of explanation """ H = BosonOperator('0 0^', 1. + 2.j) H.is_normal_ordered() normal_ordered(BosonOperator('0 0^', 1. + 2.j)) """ Explanation: normal_ordered_boson applies the bosonic commutation relations to write the operator using only normal-ordered terms; that is, that all creation operators are to the left of annihilation operators: End of explanation """ H.is_boson_preserving() H = BosonOperator('0 0^ 1^ ', 1. + 2.j) H.is_boson_preserving() """ Explanation: We can also use a boson operator method to check if the operator conserves the particle number - that is, for each qumode, the number of annihilation operators equals the number of creation operators. End of explanation """ is_hermitian(H) hermitian_conjugated(H) H = BosonOperator('0 1^', 1/2.) H += BosonOperator('1 0^', 1/2.) print(is_hermitian(H)) print(hermitian_conjugated(H)) """ Explanation: The Hermitian conjugated function returns the Hermitian conjugate of the operator, and its hermiticity can be checked using is_hermitian: End of explanation """ from openfermion.ops import QuadOperator H = QuadOperator('q0 p1 q3') print(H) print(H.terms) H2 = QuadOperator('q3 p4', 3.17) H2 -= 77. * H print('') print(H2) """ Explanation: The QuadOperator Using the bosonic ladder operators, it is common to define the canonical position and momentum operators $\hat{q}$ and $\hat{p}$: $$ \hat{q}_i = \sqrt{\frac{\hbar}{2}}(\hat{b}_i+\hat{b}^\dagger_i), ~~~ \hat{p}_i = -i\sqrt{\frac{\hbar}{2}}(\hat{b}_i-\hat{b}^\dagger_i)$$ These operators are Hermitian, and are referred to as the phase space quadrature operators. They satisfy the canonical commutation relation $$ [\hat{q}i, \hat{p}_j] = \delta{ij}i\hbar$$ where the value of $\hbar$ depends on convention, often taking values $\hbar=0.5$, $1$, or $2$. In OpenFermion, the quadrature operators are represented by the QuadOperator class, and stored as a dictionary of tuples (as keys) and coefficients (as values). For example, the multi-mode quadrature operator $q_0 p_1 q_3$ is represented internally as ((0, 'q'), (1, 'p'), (3, 'q')). Alternatively, QuadOperators also support string input - using string input, the same operator is described by 'q0 p1 q3'. End of explanation """ from openfermion.utils import hermitian_conjugated, is_hermitian """ Explanation: Note that quadrature operators of different indices commute; as such, like the BosonOperator, by default we sort quadrature operators such that the operators acting on the lowest numbered mode appear to the left. Methods and functions that act on the QuadOperator Like the BosonOperator, there are various functions and methods that act on the QuadOperator; these include the ability to normal order, double check if the operator is Hermitian, and calculate the Hermitian conjugate. End of explanation """ H = QuadOperator('p0 q0', 1. + 2.j) H.is_normal_ordered() normal_ordered(H) """ Explanation: normal_ordered_quad is an arbitrary convention chosen in OpenFermion that allows us to compare two quadrature operators that might be equivalent, but written in different forms. 
It is simply defined as a quadrature operator that has all of the position operators $\hat{q}$ to the left of the momentum operators $\hat{q}$. All quadrature operators can be placed in this 'normal form' by making use of the canonical commutation relation. End of explanation """ normal_ordered(H, hbar=2) """ Explanation: By default, we assume the value $\hbar=1$ in the canonical commutation relation, but this can be modified by passing the hbar keyword argument to the function: End of explanation """ H = QuadOperator('p0 q0', 1. + 2.j) H.is_gaussian() H = QuadOperator('p0 q0 q1', 1. + 2.j) H.is_gaussian() """ Explanation: We can also use a quad operator method to check if the operator is Gaussian - that is, all terms in the quad operator are of quadratic order or lower: End of explanation """ H = QuadOperator('p0 q1 p1', 1-2j) hermitian_conjugated(H) H = QuadOperator('p0 q0', 1/2.) H += QuadOperator('q0 p0', -1/2.) print(is_hermitian(H)) print(hermitian_conjugated(H)) H = QuadOperator('p0 q0', 1/2.) H += QuadOperator('q0 p0', 1/2.) print(is_hermitian(H)) print(hermitian_conjugated(H)) hermitian_conjugated(H) """ Explanation: The Hermitian conjugated function returns the Hermitian conjugate of the operator, and its hermiticity can be checked using is_hermitian: End of explanation """ from openfermion.transforms import get_boson_operator, get_quad_operator H = QuadOperator('p0 q0', 1/2.) H += QuadOperator('q0 p0', 1/2.) H get_boson_operator(H) """ Explanation: Converting between quadrature operators and bosonic operators Converting between bosonic ladder operators and quadrature operators is simple - we just apply the definition of the $\hat{q}$ and $\hat{p}$ operators in terms of $\hat{b}$ and $\hat{b}^\dagger$. Two functions are provided to do this automatically; get_quad_operator and get_boson_operator: End of explanation """ H = BosonOperator('0 0^') normal_ordered(get_quad_operator(H, hbar=0.5), hbar=0.5) """ Explanation: Note that, since these conversions are dependent on the value of $\hbar$ chosen, both accept a hbar keyword argument. As before, if not specified, the default value of $\hbar$ is hbar=1. End of explanation """ from openfermion.transforms import weyl_polynomial_quantization, symmetric_ordering """ Explanation: Weyl quantization and symmetric ordering We also provide support for the Weyl quantization - this maps a polynomial function of the form $$f(q_0,\dots,q_{N-1},p_0\dots,p_{N-1})=q_0^{m_0}\cdots q_{N-1}^{m_{N-1}} p_0^{m_0}\cdots p_{N-1}^{m_{N-1}}$$ on the phase space to the corresponding combination of quadrature operators $\hat{q}$ and $\hat{p}$. To do so, we make use of the McCoy formula, $$q^m p^n \rightarrow \frac{1}{2^n} \sum_{r=0}^{n} \binom{n}{r} q^r p^m q^{n-r}.$$ End of explanation """ weyl_polynomial_quantization('q0 p0') weyl_polynomial_quantization('q0^2 p0^3 q1^3') """ Explanation: For weyl_polynomial_quantization, the polynomial function in the phase space is provided in the form of a string, where 'q' or 'p' is the phase space quadrature variable, the integer directly following is the mode it is with respect to, and '^2' is the polynomial power. If the power is not provided, it is assumed to be '^1'. End of explanation """ symmetric_ordering(QuadOperator('q0 p0')) """ Explanation: McCoy's formula is also used to provide a function that returns the symmetric ordering of a BosonOperator or QuadOperator, $S(\hat{O})$. 
Note that $S(\hat{O})\neq \hat{O}$: End of explanation """ from openfermion.hamiltonians import number_operator n2 = number_operator(1, parity=1) * number_operator(1, parity=1) n2 Sn2 = symmetric_ordering(n2) Sn2 """ Explanation: Consider the symmetric ordering of the square of the bosonic number operator, $\hat{n} = \hat{b}^\dagger \hat{b}$: End of explanation """ Sn2 = normal_ordered(Sn2) Sn2 """ Explanation: We can use normal_ordered_boson to simplify this result: End of explanation """ Sn2 == normal_ordered(n2 + number_operator(1, parity=1) + 0.5*BosonOperator.identity()) """ Explanation: Therefore $S(\hat{n}) = \hat{b}^\dagger \hat{b}^\dagger \hat{b}\hat{b} + 2\hat{b}^\dagger \hat{b} + 0.5$. This is equivalent to $\hat{n}^2+\hat{n}+0.5$: End of explanation """ from openfermion.hamiltonians import bose_hubbard, fermi_hubbard bose_hubbard(2, 2, 1, 1) """ Explanation: Bose-Hubbard Hamiltonian In addition to the bosonic operators discussed above, we also provide Bosonic Hamiltonians that describe specific models. The Bose-Hubbard Hamiltonian over a discrete lattice or grid described by nodes $V={0,1,\dots,N-1}$ is described by: $$H = - t \sum_{\langle i, j \rangle} b_i^\dagger b_{j + 1} + \frac{U}{2} \sum_{k=1}^{N-1} b_k^\dagger b_k (b_k^\dagger b_k - 1) - \mu \sum_{k=1}^N b_k^\dagger b_k + V \sum_{\langle i, j \rangle} b_i^\dagger b_i b_j^\dagger b_j.$$ where The indices $\langle i, j \rangle$ run over pairs $i$ and $j$ of adjacenct nodes (nodes that are connected) in the grid $t$ is the tunneling amplitude $U$ is the on-site interaction potential $\mu$ is the chemical potential $V$ is the dipole or nearest-neighbour interaction potential The Bose-Hubbard Hamiltonian function provided in OpenFermion models a Bose-Hubbard model on a two-dimensional grid, with dimensions given by [x_dimension, y_dimension]. It has the form python bose_hubbard(x_dimension, y_dimension, tunneling, interaction, chemical_potential=0., dipole=0., periodic=True) where x_dimension (int): The width of the grid. y_dimension (int): The height of the grid. tunneling (float): The tunneling amplitude $t$. interaction (float): The attractive local interaction $U$. chemical_potential (float, optional): The chemical potential $\mu$ at each site. Default value is 0. periodic (bool, optional): If True, add periodic boundary conditions. Default is True. dipole (float): The attractive dipole interaction strength $V$. Below is an example of a Bose-Hubbard Hamiltonian constructed in OpenFermion. End of explanation """ from openfermion.linalg import boson_operator_sparse """ Explanation: Sparse bosonic operators Like the fermionic operators, OpenFermion contains the capability to represent bosonic operators as a sparse matrix (sparse.csc_matrix). However, as the fermionic operators can be represented as finite matrices, this is not the case of bosonic systems, as they inhabit a infinite-dimensional Fock space. Instead, a integer truncation value $N$ need to be provided - the returned sparse operator will be of size $N^{M}\times N^{M}$, where $M$ is the number of modes in the system, and acts on the truncated Fock basis ${\left|{0}\right\rangle, \left|{1}\right\rangle, \dots, \left|{N-1}\right\rangle}$. End of explanation """ H = boson_operator_sparse(BosonOperator('0^ 0'), 5) H.toarray() H = boson_operator_sparse(QuadOperator('q0'), 5, hbar=1) H.toarray() """ Explanation: The function boson_operator_sparse acts on both BosonOperators and QuadOperators: End of explanation """
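"""
Explanation: As an optional sanity check (a sketch, not part of the original tutorial), the truncated sparse matrices can be used to verify the canonical commutation relation [b, b^dagger] = 1 numerically. Because of the Fock-space truncation, the relation holds on the lower-lying states, while the last diagonal entry reflects the cutoff.
End of explanation
"""
from openfermion.ops import BosonOperator
from openfermion.linalg import boson_operator_sparse

trunc = 5
b = boson_operator_sparse(BosonOperator('0'), trunc)       # annihilation on mode 0
b_dag = boson_operator_sparse(BosonOperator('0^'), trunc)  # creation on mode 0

# b b^dagger - b^dagger b should be (close to) the identity on the truncated basis
commutator = (b.dot(b_dag) - b_dag.dot(b)).toarray()
print(commutator)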
NuGrid/NuPyCEE
DOC/Capabilities/AddingDataToStellab.ipynb
bsd-3-clause
%matplotlib nbagg import matplotlib.pyplot as plt from NuPyCEE import stellab as st """ Explanation: Adding Stellar Data to STELLAB Contributors: Christian Ritter In construction End of explanation """ s1=st.stellab() xaxis='[Fe/H]' yaxis='[O/Fe]' s1.plot_spectro(fig=1,xaxis=xaxis,galaxy='carina') plt.xlim(-4.5,1),plt.ylim(-1.5,1.5) """ Explanation: The goal is to add your data to STELLAB to produce plots such as the plot below: End of explanation """ from IPython.display import YouTubeVideo YouTubeVideo("R3_EZlXTFBo") s1_new=st.stellab() # available data # s1_new.list_ref_papers() s1_new.plot_spectro(fig=2,yaxis=yaxis, obs=['stellab_data/carina_data/Fabrizio_et_al_2015_stellab'],show_err=True) plt.xlim(-4,0),plt.ylim(-2,2) """ Explanation: Adding your own data. End of explanation """ #from IPython.display import YouTubeVideo #YouTubeVideo("Pi9NpxAvYSs") """ Explanation: Uploading data coming soon... End of explanation """
OpenWeavers/openanalysis
doc/Langauge/14 - Inheritance.ipynb
gpl-3.0
class Person: # Constructor def __init__(self, name, age): self.name = name self.age = age def __str__(self): return 'name = {}\nage = {}'.format(self.name,self.age) # Inherited or Sub class class Employee(Person): def __init__(self, name, age, employee_id): Person.__init__(self, name, age) # Referring Base class # Can also be done by super(Employee, self).__init__(name, age) self.employee_id = employee_id # Overriding implied code reusability def __str__(self): return Person.__str__(self) + '\nemployee id = {}'.format(self.employee_id) s = Person('Kiran',18) print(s) e = Employee('Ramesh',18,48) print(e) """ Explanation: Inheritance Inheritance means extending the properties of one class by another. Inheritance implies code reusability, because of which client classes do not need to implement everything from scratch. They can simply refer to their base classes to execute the code. Unlike Java and C#, like C++, Python allows Multiple inheritance. Name resolution is done by the order in which the base classes are specified. Syntax python class ClassName(BaseClass1[,BaseClass2,....,BaseClassN]): &lt;statement 0&gt; &lt;statement 1&gt; &lt;statement 2&gt; ... ... ... &lt;statement n&gt; A First Example End of explanation """ class Base1: def some_method(self): print('Base1') class Base2: def some_method(self): print('Base2') class Derived1(Base1,Base2): pass class Derived2(Base2,Base1): pass """ Explanation: <div class="alert alert-info"> **Note** Base class can be referred from derived class in two ways - Base Class name - `BaseClass.function(self,args)` - using `super()` - `super(DerivedClass, self).function(args)` </div> Multiple inheritance and Order of Invocation of Methods End of explanation """ d1 = Derived1() d2 = Derived2() """ Explanation: Note how pass statement is used to leave the class body empty. Otherwise it would have raised a Syntax Error. Since Drived1 and Derived2 are empty, they would have imported the methods from their base classes End of explanation """ d1.some_method() d2.some_method() """ Explanation: Now what will be the result of invoking some_method on d1 and d2? ... Does the name clash ocuur? ... Let's see End of explanation """
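"""
Explanation: The lookup order hinted at above is Python's Method Resolution Order (MRO), and it can be inspected directly on the Derived1 and Derived2 classes defined earlier: Derived1 searches Base1 before Base2, while Derived2 does the opposite.
End of explanation
"""
# Inspect the MRO explicitly (both spellings are equivalent)
print(Derived1.__mro__)   # (Derived1, Base1, Base2, object)
print(Derived2.mro())     # [Derived2, Base2, Base1, object]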
eriksalt/jupyter
Python Quick Reference/Collections.ipynb
mit
from collections import deque dq = deque() dq.append(1) dq.append(2) dq.appendleft(3) dq v = dq.pop() v dq.popleft() dq """ Explanation: Python Collections Quick Reference Table Of Contents <a href="#1.-Deque">Deque</a> <a href="#2.-Heapq">Heapq</a> <a href="#3.-Counter">Counter</a> 1. Deque End of explanation """ dq = deque(maxlen = 3) for n in range(10): dq.append(n) dq """ Explanation: Using maxlen to limit the num of items in a deque End of explanation """ import heapq nums = [1, 8, 2, 23, 7, -4, 18, 23, 42, 37, 2] #heapq is created from a list heap = list(nums) heapq.heapify(heap) #now the 1st element is guarenteed to be the smallest heap heapq.heappop(heap) heap heapq.heappush(heap, -10) heap """ Explanation: 2. Heapq heapq provides O(1) access to the smallest item in the heap. End of explanation """ # nlargest and nsmallest wrap a heapq to provide its results print(heapq.nlargest(3, nums)) # Prints [42, 37, 23] print(heapq.nsmallest(3, nums)) # Prints [-4, 1, 2] # providing an alternate sort key to nlargest/nsmallest portfolio = [ {'name': 'IBM', 'shares': 100, 'price': 91.1}, {'name': 'AAPL', 'shares': 50, 'price': 543.22}, {'name': 'FB', 'shares': 200, 'price': 21.09}, {'name': 'HPQ', 'shares': 35, 'price': 31.75}, {'name': 'YHOO', 'shares': 45, 'price': 16.35}, {'name': 'ACME', 'shares': 75, 'price': 115.65} ] heapq.nsmallest(3, portfolio, key=lambda s: s['price']) """ Explanation: nlargest / nsmallest wraps creation of a heap for one-time access End of explanation """ words = [ 'look', 'into', 'my', 'eyes', 'look', 'into', 'my', 'eyes', 'the', 'eyes', 'the', 'eyes', 'the', 'eyes', 'not', 'around', 'the', 'eyes', "don't", 'look', 'around', 'the', 'eyes', 'look', 'into', 'my', 'eyes', "you're", 'under' ] from collections import Counter word_counts = Counter(words) #Works with any hashable items, not just strings! word_counts.most_common(3) morewords = ['why','are','you','not','looking','in','my','eyes'] for word in morewords: word_counts[word] += 1 word_counts.most_common(3) evenmorewords = ['seriously','look','into','them','while','i','look','at', 'you'] word_counts.update(evenmorewords) word_counts.most_common(3) a = Counter(words) b = Counter(morewords) c = Counter(evenmorewords) # combine counters d = b + c d # subtract counts e = a-d e """ Explanation: 3. Counter End of explanation """
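Two small additions to the heapq discussion above (illustrative only): the minimum can be peeked without popping, and heappushpop keeps the heap size constant while streaming new values through it.
```python
import heapq

heap = [5, 9, 7, 20, 12]
heapq.heapify(heap)
print(heap[0])                    # O(1) peek at the smallest item; nothing is removed

# Push a new value and pop the smallest in a single step
smallest = heapq.heappushpop(heap, 3)
print(smallest, heap[0])          # 3 comes straight back out; 5 is still the heap minimum
```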
jrg365/gpytorch
examples/07_Pyro_Integration/Clustered_Multitask_GP_Regression.ipynb
mit
import math import torch import pyro import gpytorch from matplotlib import pyplot as plt %matplotlib inline %load_ext autoreload %autoreload 2 # this is for running the notebook in our testing framework import os smoke_test = ('CI' in os.environ) """ Explanation: Clustered Multitask GP (w/ Pyro/GPyTorch High-Level Interface) Introduction In this example, we use the Pyro integration for a GP model with additional latent variables. We are modelling a multitask GP in this example. Rather than assuming a linear correlation among the different tasks, we assume that there is cluster structure for the different tasks. Let's assume there are $k$ different clusters of tasks. The generative model for task $i$ is: $$ p(\mathbf y_i \mid \mathbf x_i) = \int \sum_{z_i=1}^k p(\mathbf y_i \mid \mathbf f (\mathbf x_i), z_i) \: p(z_i) \: p(\mathbf f (\mathbf x_i) ) \: d \mathbf f $$ where $z_i$ is the cluster assignment for task $i$. There are therefore $k$ latent functions $\mathbf f = [f_1 \ldots f_k]$, each modelled by a GP, representing each cluster. Our goal is therefore to infer: The latent functions $f_1 \ldots f_k$ The cluster assignments $z_i$ for each task End of explanation """ class ClusterGaussianLikelihood(gpytorch.likelihoods.Likelihood): def __init__(self, num_tasks, num_clusters): super().__init__() # These are parameters/buffers for the cluster assignment latent variables self.register_buffer("prior_cluster_logits", torch.zeros(num_tasks, num_clusters)) self.register_parameter("variational_cluster_logits", torch.nn.Parameter(torch.randn(num_tasks, num_clusters))) # The Gaussian observational noise self.register_parameter("raw_noise", torch.nn.Parameter(torch.tensor(0.0))) # Other info self.num_tasks = num_tasks self.num_clusters = num_clusters self.max_plate_nesting = 1 def pyro_guide(self, function_dist, target): # Here we add the extra variational distribution for the cluster latent variable pyro.sample( self.name_prefix + ".cluster_logits", # self.name_prefix is added by PyroGP pyro.distributions.OneHotCategorical(logits=self.variational_cluster_logits).to_event(1) ) return super().pyro_guide(function_dist, target) def pyro_model(self, function_dist, target): # Here we add the extra prior distribution for the cluster latent variable cluster_assignment_samples = pyro.sample( self.name_prefix + ".cluster_logits", # self.name_prefix is added by PyroGP pyro.distributions.OneHotCategorical(logits=self.prior_cluster_logits).to_event(1) ) return super().pyro_model(function_dist, target, cluster_assignment_samples=cluster_assignment_samples) def forward(self, function_samples, cluster_assignment_samples=None): # For inference, cluster_assignment_samples will be passed in # This bit of code is for when we use the likelihood in the predictive mode if cluster_assignment_samples is None: cluster_assignment_samples = pyro.sample( self.name_prefix + ".cluster_logits", self._cluster_dist(self.variational_cluster_logits) ) # Now we return the observational distribution, based on the function_samples and cluster_assignment_samples res = pyro.distributions.Normal( loc=(function_samples.unsqueeze(-2) * cluster_assignment_samples).sum(-1), scale=torch.nn.functional.softplus(self.raw_noise).sqrt() ).to_event(1) return res """ Explanation: Adding additional latent variables to the likelihood The standard GPyTorch variational objects will take care of inferring the latent functions $f_1 \ldots f_k$. However, we do need to add the additional latent variables $z_i$ to the models. 
We will do so by creating a custom likelihood that models:
$$ \sum_{z_i=1}^k p(\mathbf y_i \mid \mathbf f (\mathbf x_i), z_i) \: p(z_i) $$
GPyTorch's likelihoods are capable of modeling additional latent variables. Our custom likelihood needs to define the following three functions:
- pyro_model (needs to call through to super().pyro_model at the end), which defines the prior distribution for additional latent variables
- pyro_guide (needs to call through to super().pyro_guide at the end), which defines the variational (guide) distribution for additional latent variables
- forward, which defines the observation distributions conditioned on $\mathbf f (\mathbf x_i)$ and any additional latent variables.
The pyro_model function
For each task, we will model the cluster assignment with a OneHotCategorical variable, where each cluster has equal probability. The pyro_model function will make a pyro.sample call to this prior distribution and then call the super method:
```python
# self.prior_cluster_logits = torch.zeros(num_tasks, num_clusters)
def pyro_model(self, function_dist, target):
    cluster_assignment_samples = pyro.sample(
        self.name_prefix + ".cluster_logits",  # self.name_prefix is added by PyroGP
        pyro.distributions.OneHotCategorical(logits=self.prior_cluster_logits).to_event(1)
    )
    return super().pyro_model(
        function_dist,
        target,
        cluster_assignment_samples=cluster_assignment_samples
    )
```
Note that we are adding an additional argument cluster_assignment_samples to the super().pyro_model call. This will pass the cluster assignment samples to the forward call, which is necessary for inference.
The pyro_guide function
For each task, the variational (guide) distribution will also be a OneHotCategorical variable, defined by the parameter self.variational_cluster_logits. The pyro_guide function will make a pyro.sample call to this variational distribution and then call the super method:
```python
def pyro_guide(self, function_dist, target):
    pyro.sample(
        self.name_prefix + ".cluster_logits",  # self.name_prefix is added by PyroGP
        pyro.distributions.OneHotCategorical(logits=self.variational_cluster_logits).to_event(1)
    )
    return super().pyro_guide(function_dist, target)
```
The forward function
The pyro_model function passes the additional keyword argument cluster_assignment_samples to the forward call. Therefore, our forward method will define the conditional probability $p(\mathbf y_i \mid \mathbf f(\mathbf x), z_i)$, where $\mathbf f(\mathbf x)$ corresponds to the variable function_samples and $z_i$ corresponds to the variable cluster_assignment_samples. In our example $p(\mathbf y_i \mid \mathbf f(\mathbf x), z_i)$ corresponds to a Gaussian noise model.
```python
# self.raw_noise is the Gaussian noise parameter
# function_samples is `n x k`
# cluster_assignment_samples is `t x k`, where `t` is the number of tasks
def forward(self, function_samples, cluster_assignment_samples):
    return pyro.distributions.Normal(
        loc=(function_samples.unsqueeze(-2) * cluster_assignment_samples).sum(-1),
        scale=torch.nn.functional.softplus(self.raw_noise).sqrt()
    ).to_event(1)
    # The to_event call is necessary because we are returning a multitask distribution,
    # where each task dimension corresponds to each of the `t` tasks
```
This is all we need for inference!
However, if we want to use this model to make predictions, the cluster_assignment_samples keyword argument will not be passed into the function. Therefore, we need to make sure that forward can handle both inference and predictions: ```python def forward(self, function_samples, cluster_assignment_samples=None): if cluster_assignment_samples is None: # We'll get here at prediction time # We'll use the variational distribution when making predictions cluster_assignment_samples = pyro.sample( self.name_prefix + ".cluster_logits", self._cluster_dist(self.variational_cluster_logits) ) return pyro.distributions.Normal( loc=(function_samples.unsqueeze(-2) * cluster_assignment_samples).sum(-1), scale=torch.nn.functional.softplus(self.raw_noise).sqrt() ).to_event(1) ``` End of explanation """ class ClusterMultitaskGPModel(gpytorch.models.pyro.PyroGP): def __init__(self, train_x, train_y, num_functions=2, reparam=False): num_data = train_y.size(-2) # Define all the variational stuff inducing_points = torch.linspace(0, 1, 64).unsqueeze(-1) variational_distribution = gpytorch.variational.CholeskyVariationalDistribution( num_inducing_points=inducing_points.size(-2), batch_shape=torch.Size([num_functions]) ) # Here we're using a IndependentMultitaskVariationalStrategy - so that the output of the # GP latent function is a MultitaskMultivariateNormal variational_strategy = gpytorch.variational.IndependentMultitaskVariationalStrategy( gpytorch.variational.VariationalStrategy(self, inducing_points, variational_distribution), num_tasks=num_functions, ) # Standard initializtation likelihood = ClusterGaussianLikelihood(train_y.size(-1), num_functions) super().__init__(variational_strategy, likelihood, num_data=num_data, name_prefix=str(time.time())) self.likelihood = likelihood self.num_functions = num_functions # Mean, covar self.mean_module = gpytorch.means.ZeroMean() self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel()) def forward(self, x): mean_x = self.mean_module(x) covar_x = self.covar_module(x) res = gpytorch.distributions.MultivariateNormal(mean_x, covar_x) return res """ Explanation: Constructing the PyroGP model The PyroGP model is essentially the same as the model we used in the simple example, except for two changes We now will use our more complicated ClusterGaussianLikelihood The latent function should be vector valued to correspond to the k latent functions. As a result, we will learn a batched variational distribution, and use a IndependentMultitaskVariationalStrategy to convert the batched variational distribution into a MultitaskMultivariateNormal distribution. End of explanation """
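For completeness, a hypothetical training sketch is shown below. It assumes, as in GPyTorch's other Pyro examples, that a PyroGP subclass exposes model and guide callables that can be handed to pyro.infer.SVI; the toy data shapes and the learning rate are made-up values for illustration only.
```python
# Hypothetical SVI loop for ClusterMultitaskGPModel (shapes and settings are assumptions)
train_x = torch.linspace(0, 1, 100)
train_y = torch.randn(100, 4)          # 4 tasks to be assigned to num_functions clusters

model = ClusterMultitaskGPModel(train_x, train_y, num_functions=2)

optimizer = pyro.optim.Adam({"lr": 0.01})
svi = pyro.infer.SVI(model.model, model.guide, optimizer, pyro.infer.Trace_ELBO())

for step in range(1000):
    loss = svi.step(train_x, train_y)
```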
deepfield/ibis
docs/source/notebooks/tutorial/10-Adding-a-new-reduction-expression.ipynb
apache-2.0
import ibis.expr.datatypes as dt
import ibis.expr.rules as rlz
from ibis.expr.operations import Reduction, Arg

class BitwiseAnd(Reduction):
    arg = Arg(rlz.column(rlz.integer))
    where = Arg(rlz.boolean, default=None)
    output_type = rlz.scalar_like('arg')
"""
Explanation: Extending Ibis Part 2: Adding a New Reduction Expression
This notebook will show you how to add a new reduction operation (bitwise_and) to an existing backend (PostgreSQL). A reduction operation is a function that maps $N$ rows to 1 row, for example the sum function.
Description
We're going to add a bitwise_and function to ibis. bitwise_and reduces a column of integers by AND-ing their bits together. For example:
```
  0101
  0111
  0011
& 1101
------
  0001
```
Step 1: Define the Operation
Let's define the bitwise_and operation as a function that takes any integer-typed column as input and returns an integer
```haskell
bitwise_and :: Column Int -> Int
```
End of explanation
"""
End of explanation """ import ibis t = ibis.table([('bigint_col', 'int64'), ('string_col', 'string')], name='t') t.bigint_col.bitwise_and() t.bigint_col.bitwise_and(t.string_col == '1') """ Explanation: Interlude: Create some expressions using bitwise_and End of explanation """ import sqlalchemy as sa @ibis.postgres.compiles(BitwiseAnd) def compile_sha1(translator, expr): # pull out the arguments to the expression arg, where = expr.op().args # compile the argument compiled_arg = translator.translate(arg) # call the appropriate postgres function agg = sa.func.bit_and(compiled_arg) # handle a non-None filter clause if where is not None: return agg.filter(translator.translate(where)) return agg """ Explanation: Step 3: Turn the Expression into SQL End of explanation """ con = ibis.postgres.connect( user='postgres', host='postgres', password='postgres', database='ibis_testing' ) """ Explanation: Step 4: Putting it all Together Connect to the ibis_testing database NOTE: To be able to execute the rest of this notebook you need to run the following command from your ibis clone: sh ci/build.sh End of explanation """ t = con.table('functional_alltypes') t expr = t.bigint_col.bitwise_and() expr sql_expr = expr.compile() print(sql_expr) expr.execute() """ Explanation: Create and execute a bitwise_and expression End of explanation """ expr = t.bigint_col.bitwise_and(where=(t.bigint_col == 10) | (t.bigint_col == 40)) expr result = expr.execute() result """ Explanation: Let's see what a bitwise_and call looks like with a where argument End of explanation """ 10 & 40 print(' {:0>8b}'.format(10)) print('& {:0>8b}'.format(40)) print('-' * 10) print(' {:0>8b}'.format(10 & 40)) """ Explanation: Let's confirm that taking bitwise AND of 10 and 40 is in fact 8 End of explanation """
mne-tools/mne-tools.github.io
0.19/_downloads/2b9ae87368ee06cd9589fd87e1be1d30/plot_time_frequency_mixed_norm_inverse.ipynb
bsd-3-clause
# Author: Alexandre Gramfort <alexandre.gramfort@inria.fr> # Daniel Strohmeier <daniel.strohmeier@tu-ilmenau.de> # # License: BSD (3-clause) import numpy as np import mne from mne.datasets import sample from mne.minimum_norm import make_inverse_operator, apply_inverse from mne.inverse_sparse import tf_mixed_norm, make_stc_from_dipoles from mne.viz import (plot_sparse_source_estimates, plot_dipole_locations, plot_dipole_amplitudes) print(__doc__) data_path = sample.data_path() subjects_dir = data_path + '/subjects' fwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif' ave_fname = data_path + '/MEG/sample/sample_audvis-no-filter-ave.fif' cov_fname = data_path + '/MEG/sample/sample_audvis-shrunk-cov.fif' # Read noise covariance matrix cov = mne.read_cov(cov_fname) # Handling average file condition = 'Left visual' evoked = mne.read_evokeds(ave_fname, condition=condition, baseline=(None, 0)) evoked = mne.pick_channels_evoked(evoked) # We make the window slightly larger than what you'll eventually be interested # in ([-0.05, 0.3]) to avoid edge effects. evoked.crop(tmin=-0.1, tmax=0.4) # Handling forward solution forward = mne.read_forward_solution(fwd_fname) """ Explanation: Compute MxNE with time-frequency sparse prior The TF-MxNE solver is a distributed inverse method (like dSPM or sLORETA) that promotes focal (sparse) sources (such as dipole fitting techniques) [1] [2]. The benefit of this approach is that: it is spatio-temporal without assuming stationarity (sources properties can vary over time) activations are localized in space, time and frequency in one step. with a built-in filtering process based on a short time Fourier transform (STFT), data does not need to be low passed (just high pass to make the signals zero mean). the solver solves a convex optimization problem, hence cannot be trapped in local minima. References .. [1] A. Gramfort, D. Strohmeier, J. Haueisen, M. Hamalainen, M. Kowalski "Time-Frequency Mixed-Norm Estimates: Sparse M/EEG imaging with non-stationary source activations", Neuroimage, Volume 70, pp. 410-422, 15 April 2013. DOI: 10.1016/j.neuroimage.2012.12.051 .. [2] A. Gramfort, D. Strohmeier, J. Haueisen, M. Hamalainen, M. Kowalski "Functional Brain Imaging with M/EEG Using Structured Sparsity in Time-Frequency Dictionaries", Proceedings Information Processing in Medical Imaging Lecture Notes in Computer Science, Volume 6801/2011, pp. 600-611, 2011. DOI: 10.1007/978-3-642-22092-0_49 End of explanation """ # alpha parameter is between 0 and 100 (100 gives 0 active source) alpha = 40. # general regularization parameter # l1_ratio parameter between 0 and 1 promotes temporal smoothness # (0 means no temporal regularization) l1_ratio = 0.03 # temporal regularization parameter loose, depth = 0.2, 0.9 # loose orientation & depth weighting # Compute dSPM solution to be used as weights in MxNE inverse_operator = make_inverse_operator(evoked.info, forward, cov, loose=loose, depth=depth) stc_dspm = apply_inverse(evoked, inverse_operator, lambda2=1. 
/ 9., method='dSPM') # Compute TF-MxNE inverse solution with dipole output dipoles, residual = tf_mixed_norm( evoked, forward, cov, alpha=alpha, l1_ratio=l1_ratio, loose=loose, depth=depth, maxit=200, tol=1e-6, weights=stc_dspm, weights_min=8., debias=True, wsize=16, tstep=4, window=0.05, return_as_dipoles=True, return_residual=True) # Crop to remove edges for dip in dipoles: dip.crop(tmin=-0.05, tmax=0.3) evoked.crop(tmin=-0.05, tmax=0.3) residual.crop(tmin=-0.05, tmax=0.3) """ Explanation: Run solver End of explanation """ plot_dipole_amplitudes(dipoles) # Plot dipole location of the strongest dipole with MRI slices idx = np.argmax([np.max(np.abs(dip.amplitude)) for dip in dipoles]) plot_dipole_locations(dipoles[idx], forward['mri_head_t'], 'sample', subjects_dir=subjects_dir, mode='orthoview', idx='amplitude') # # Plot dipole locations of all dipoles with MRI slices # for dip in dipoles: # plot_dipole_locations(dip, forward['mri_head_t'], 'sample', # subjects_dir=subjects_dir, mode='orthoview', # idx='amplitude') """ Explanation: Plot dipole activations End of explanation """ ylim = dict(grad=[-120, 120]) evoked.pick_types(meg='grad', exclude='bads') evoked.plot(titles=dict(grad='Evoked Response: Gradiometers'), ylim=ylim, proj=True, time_unit='s') residual.pick_types(meg='grad', exclude='bads') residual.plot(titles=dict(grad='Residuals: Gradiometers'), ylim=ylim, proj=True, time_unit='s') """ Explanation: Show the evoked response and the residual for gradiometers End of explanation """ stc = make_stc_from_dipoles(dipoles, forward['src']) """ Explanation: Generate stc from dipoles End of explanation """ plot_sparse_source_estimates(forward['src'], stc, bgcolor=(1, 1, 1), opacity=0.1, fig_name="TF-MxNE (cond %s)" % condition, modes=['sphere'], scale_factors=[1.]) time_label = 'TF-MxNE time=%0.2f ms' clim = dict(kind='value', lims=[10e-9, 15e-9, 20e-9]) brain = stc.plot('sample', 'inflated', 'rh', views='medial', clim=clim, time_label=time_label, smoothing_steps=5, subjects_dir=subjects_dir, initial_time=150, time_unit='ms') brain.add_label("V1", color="yellow", scalar_thresh=.5, borders=True) brain.add_label("V2", color="red", scalar_thresh=.5, borders=True) """ Explanation: View in 2D and 3D ("glass" brain like 3D plot) End of explanation """
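One cheap sanity check that can be added after the solver run (an illustrative addition, using only the arrays already loaded above): compare the energy of the residual with that of the evoked response.
```python
# Rough goodness-of-fit: fraction of gradiometer signal energy explained by the dipoles
evoked_energy = np.sum(evoked.data ** 2)
residual_energy = np.sum(residual.data ** 2)
explained = 1.0 - residual_energy / evoked_energy
print('Fraction of signal energy explained: %.3f' % explained)
```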
danielfather7/teach_Python
SEDS_Hw/seds-hw-2-procedural-python-part-1-danielfather7/SEDS-HW2.ipynb
gpl-3.0
import os filename = 'HCEPDB_moldata.zip' if os.path.exists(filename): print('File already exists.') else: print("File doesn't exist.") import requests url = 'http://faculty.washington.edu/dacb/HCEPDB_moldata.zip' req = requests.get(url) assert req.status_code == 200 with open(filename, 'wb') as f: f.write(req.content) import zipfile import pandas as pd csv_filename = 'HCEPDB_moldata.csv' zf = zipfile.ZipFile(filename) data = pd.read_csv(zf.open(csv_filename)) data.head() """ Explanation: Part 1 : For a single file End of explanation """ import os import requests import zipfile import pandas as pd zipfiles = ['HCEPDB_moldata_set1.zip','HCEPDB_moldata_set2.zip','HCEPDB_moldata_set3.zip'] url = {'HCEPDB_moldata_set1.zip':'http://faculty.washington.edu/dacb/HCEPDB_moldata_set1.zip','HCEPDB_moldata_set2.zip':'http://faculty.washington.edu/dacb/HCEPDB_moldata_set2.zip','HCEPDB_moldata_set3.zip':'http://faculty.washington.edu/dacb/HCEPDB_moldata_set3.zip'} csvfile = {'HCEPDB_moldata_set1.zip':'HCEPDB_moldata_set1.csv','HCEPDB_moldata_set2.zip':'HCEPDB_moldata_set2.csv','HCEPDB_moldata_set3.zip':'HCEPDB_moldata_set3.csv'} zf = [] data = [] alldata = pd.DataFrame() for i in range(len(zipfiles)): #check whether file exists. if os.path.exists(zipfiles[i]): print(zipfiles[i],'exists.') else: print(zipfiles[i],"doesn't exist.") #Download files. print(zipfiles[i],'is downloading.') req = requests.get(url[zipfiles[i]]) assert req.status_code == 200 with open(zipfiles[i], 'wb') as f: f.write(req.content) print(zipfiles[i],'is downloaded.') #Unzip and read .csv files. zf.append(zipfile.ZipFile(zipfiles[i])) data.append(pd.read_csv(zf[i].open(csvfile[zipfiles[i]]))) alldata = alldata.append(data[i],ignore_index=True) #Check data print('\nCheck data') print('shape of',csvfile[zipfiles[0]],'=',data[0].shape,'\nshape of',csvfile[zipfiles[1]],'=',data[1].shape,'\nshape of',csvfile[zipfiles[2]],'=',data[2].shape, '\nshape of all data =',alldata.shape) print('\n') alldata.tail() """ Explanation: Part 2 : For three or more files Set 1: download and unzip files, and read data. Create a list for all files, and two dictionaries to conect to their url and file name of .csv. Check which file exists by using os.path.exists in for and if loop, and print out results. Only download files which don't exist by putting code in else loop. Add some print commands in the loop to show which file is downloading and tell after it is done. Unzip the files, and use zf list and data lits to read 3 .csv files respectively. <span style="color:red">Since 3 sets of data are the same kind of data, I first creat a blank data frame outside the for loop, and then use append command to merge all the data. Use shape and tail command to check data. 
End of explanation """ import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline import math alldata['(xi-x)^2'] = (alldata['mass'] - alldata['mass'].mean())**2 SD = math.sqrt(sum(alldata['(xi-x)^2'])/alldata.shape[0]) M = alldata['mass'].mean() print('standard diviation of mass = ',SD,', mean of mass = ',M,"\n") alldata['mass_group'] = pd.cut(alldata['mass'],bins=[min(alldata['mass']),M-3*SD,M-2*SD,M-SD,M+SD,M+2*SD,M+3*SD,max(alldata['mass'])],labels=["<(-3SD)","-3SD~-2SD","-2SD~-SD","-SD~+SD","+SD~+2SD","+2SD~+3SD",">(+3SD)"]) count = pd.value_counts(alldata['mass_group'],normalize=True) print("Count numbers in each group(%)\n",count,"\n") print("within 1 standard diviation:",count[3],"\nwithin 2 standard diviation:",count[2]+count[3]+count[4],"\nwithin 3 standard diviation:",count[2]+count[3]+count[4]+count[1]+count[5],"\n") print("Conclusions: mass is nearly normal distribution!") """ Explanation: Set 2: analyza data End of explanation """
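The manual standard-deviation calculation above can be cross-checked with pandas built-ins; this short block is an illustrative addition that reuses the alldata frame built earlier.
```python
# Cross-check with pandas one-liners (ddof=0 matches dividing by n, as done above)
m = alldata['mass'].mean()
sd = alldata['mass'].std(ddof=0)
print('mean of mass =', m, ', population SD of mass =', sd)

# Fraction of rows within 1, 2 and 3 standard deviations of the mean
for k in (1, 2, 3):
    frac = ((alldata['mass'] - m).abs() <= k * sd).mean()
    print('within %d SD: %.3f' % (k, frac))
```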
NICTA/revrand
demos/reparameterization_trick.ipynb
apache-2.0
%matplotlib inline import numpy as np import matplotlib.pyplot as pl pl.style.use('ggplot') from scipy.stats import norm from scipy.special import expit from scipy.integrate import quadrature from scipy.misc import derivative from revrand.mathfun.special import softplus from revrand.optimize import sgd, Adam # Initial values x = 0 mu = 2 sigma = 3 lambd = 0.5 L = 50 # The test exact = norm.logpdf(x, loc=mu, scale=sigma) - lambd / (2 * sigma**2) print("Exact expectation = {}".format(exact)) # Normal Monte Calo estimation z = norm.rvs(loc=mu, scale=np.sqrt(lambd), size=(L,)) approx_mc = norm.logpdf(x, loc=z, scale=sigma) print("MC Approx expectation = {} ({})".format(approx_mc.mean(), approx_mc.std())) # Reparameterised Sampling g = lambda e: mu + np.sqrt(lambd) * e e = norm.rvs(loc=0, scale=1, size=(L,)) approx_re = norm.logpdf(x, loc=g(e), scale=sigma) print("Reparameterized Approx expectation = {} ({})".format(approx_re.mean(), approx_re.std())) """ Explanation: Testing out the reparameterization trick Just a simple implementation to test if it will be appropriate for the GLM, if it is, we can use Auto-Encoding Variational Bayes inference. The basic premise is we can construct a differenctiable Monte-Carlo estimator, $$ \mathbb{E}{q(z)}[f(z)] = \int q{\theta}(z|x) f(z) dz \approx \frac{1}{L} \sum^L_{l=1} f(g_{\theta}(x, \epsilon^{(l)})), $$ where $$ z^{(l)} = g_{\theta}(x, \epsilon^{(l)}) \qquad \text{and} \qquad \epsilon^{(l)} \sim p(\epsilon), $$ that results in lower variance derivatives than Monte-Carlo sampling the derivatives using, e.g. variational black box methods. Test 1: $f(z)$ is a log-Normal Likelihood approximation Let's start with a really simple example, $$ \begin{align} f(z) &= \log \mathcal{N}(x|z, \sigma^2), \ q_\theta(z | x) &= \mathcal{N}(z | \mu, \lambda). \end{align} $$ We can solve this integral analytically, $$ \int \mathcal{N}(z | \mu, \lambda) \log \mathcal{N}(x|z, \sigma^2) dz = \log \mathcal{N}(x | \mu, \sigma^2) - \frac{\lambda}{2 \sigma^2} $$ So we can test how this compares to the reparameterization trick results. lets use the following deterministic function for reparameterization, $$ g_{(\mu, \lambda)}(\epsilon^{(l)}) = \mu + \sqrt{\lambda}\epsilon^{(l)} $$ where $$ p(\epsilon) = \mathcal{N}(0, 1) $$ Now let's test: $$ \log \mathcal{N}(x | \mu, \sigma^2) - \frac{\lambda}{2 \sigma^2} \stackrel{?}{\approx} \frac{1}{L} \sum^L_{l=1} \log \mathcal{N}(x|,g_{(\mu, \lambda)}(\epsilon^{(l)}), \sigma^2) $$ End of explanation """ # A range of mu's N = 100 mu = np.linspace(-5, 5, N) # Exact dmu = (x - mu) / sigma**2 # Approx e = norm.rvs(loc=0, scale=1, size=(L, N)) approx_dmu = (x - g(e)) / sigma**2 Edmu = approx_dmu.mean(axis=0) Sdmu = approx_dmu.std(axis=0) # plot pl.figure(figsize=(15, 10)) pl.plot(mu, dmu, 'b', label='Exact') pl.plot(mu, Edmu, 'r', label= 'Approx') pl.fill_between(mu, Edmu - 2 * Sdmu, Edmu + 2 * Sdmu, edgecolor='none', color='r', alpha=0.3) pl.legend() pl.title("Derivatives of expected log Gaussian") pl.xlabel('$\mu$') pl.ylabel('$\partial f(z)/ \partial \mu$') pl.show() """ Explanation: We would expect a trivial relationship here between exact monte-carlo and the reparameterization trick, since they are doing the same thing. Lets see if gradient estimates have lower variances now. 
Gradient approximation Let's evaluate the exact gradient for $\mu$, $$ \frac{\partial}{\partial \mu} \left(\log \mathcal{N}(x | \mu, \sigma^2) - \frac{\lambda}{2 \sigma^2} \right) = \frac{1}{\sigma^2} (x - \mu) $$ Now the approximation $$ \begin{align} \frac{\partial}{\partial \mu} \left( \frac{1}{L} \sum^L_{l=1} \log \mathcal{N}(x|,g_{(\mu, \lambda)}(\epsilon^{(l)}), \sigma^2) \right) &= \frac{1}{L} \sum^L_{l=1} \frac{1}{\sigma^2} (x - g_{(\mu, \lambda)}(\epsilon^{(l)})) \frac{\partial g_{(\mu, \lambda)}(\epsilon^{(l)})}{\partial \mu}, \ &= \frac{1}{L} \sum^L_{l=1} \frac{1}{\sigma^2} (x - g_{(\mu, \lambda)}(\epsilon^{(l)})). \end{align} $$ End of explanation """ # Quadrature def qlogp(z, mu): q = norm.pdf(z, loc=mu, scale=np.sqrt(lambd)) logp = x * z - softplus(z) return q * logp def quadELL(mu): return quadrature(qlogp, a=-10, b=10, args=(mu,))[0] ELL = [quadELL(m) for m in mu] # Reparam e = norm.rvs(loc=0, scale=1, size=(L, N)) approx_ELL = x * g(e) - softplus(g(e)) EELL = approx_ELL.mean(axis=0) SELL = approx_ELL.std(axis=0) # plot pl.figure(figsize=(15, 10)) pl.plot(mu, ELL, 'b', label='Quadrature') pl.plot(mu, EELL, 'r', label= 'Approx') pl.fill_between(mu, EELL - 2 * SELL, EELL + 2 * SELL, edgecolor='none', color='r', alpha=0.3) pl.legend() pl.title("ELL with log Bernoulli") pl.xlabel('$\mu$') pl.ylabel('$\mathbb{E}[\log Bern(x | z)]$') pl.show() """ Explanation: Test 2: $f(z)$ is log Bernoulli Now let's try the following function with the same posterior and $g$ as before, $$ f(z) = \log \text{Bern}(x | \text{logistic}(z)) = x z - \log(1 + exp(z)) $$ We can get an "exact" expectation using quadrature. First of all, likelihoods, Likelihood Approximation End of explanation """ # Quadrature dmu = [derivative(quadELL, m) for m in mu] # Reparam e = norm.rvs(loc=0, scale=1, size=(L, N)) approx_dmu = x - expit(g(e)) Edmu = approx_dmu.mean(axis=0) Sdmu = approx_dmu.std(axis=0) # plot pl.figure(figsize=(15, 10)) pl.plot(mu, dmu, 'b', label='Quadrature') pl.plot(mu, Edmu, 'r', label= 'Approx') pl.fill_between(mu, Edmu - 2 * Sdmu, Edmu + 2 * Sdmu, edgecolor='none', color='r', alpha=0.3) pl.legend() pl.title("Derivative of $\mu$ with log Bernoulli") pl.xlabel('$\mu$') pl.ylabel('$\partial f(z)/ \partial \mu$') pl.show() """ Explanation: Gradient approximation $$ \begin{align} \frac{\partial}{\partial \mu} \mathbb{E}q \left[\frac{\partial f(z)}{\partial \mu} \right] &\approx \frac{1}{L} \sum^L{l=1} (x - \text{logistic}(g(\epsilon^{(l)}))) \frac{\partial g(\epsilon^{(l)})}{\partial \mu} \ &= \frac{1}{L} \sum^L_{l=1} x - \text{logistic}(g(\epsilon^{(l)})) \end{align} $$ End of explanation """ data = np.ones((100, 1), dtype=bool) mu_rec, dmu_rec = [], [] def ell_obj(mu, x, samples=100): e = norm.rvs(loc=0, scale=1, size=(samples,)) g = mu + np.sqrt(lambd) * e ll = (x * g - softplus(g)).mean() dmu = (x - expit(g)).mean() mu_rec.append(float(mu)) dmu_rec.append(float(dmu)) return -ll, -dmu res = sgd(ell_obj, x0=np.array([-4]), data=data, maxiter=1000, updater=Adam(), eval_obj=True) # plot niter = len(mu_rec) fig = pl.figure(figsize=(15, 10)) ax1 = fig.add_subplot(111) ax1.plot(range(niter), res.norms, 'b', label='gradients') ax1.plot(range(niter), res.objs, 'g', label='negative ELL') ax1.set_ylabel('gradients/negative ELL') ax1.legend() for t in ax1.get_yticklabels(): t.set_color('b') ax2 = ax1.twinx() ax2.set_ylabel('$\mu$') ax2.plot(range(niter), mu_rec, 'r', label='$\mu$') for t in ax2.get_yticklabels(): t.set_color('r') pl.show() """ Explanation: Optimisation test Now let's see if we can optimise 
the expected log likelihood using stochastic gradient descent (SGD)!
End of explanation
"""
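To make the variance claim from the introduction concrete, here is a small self-contained comparison, added for illustration, between the score-function (black-box) estimator and the reparameterised estimator for Test 1; it reuses x, sigma and lambd from the top of the notebook.
```python
# Illustrative add-on: spread of two unbiased estimators of d/dmu E_q[log N(x | z, sigma^2)]
mu0, L_samp = 2.0, 1000
eps = norm.rvs(loc=0, scale=1, size=L_samp)
z = mu0 + np.sqrt(lambd) * eps

f = norm.logpdf(x, loc=z, scale=sigma)
score_samples = f * (z - mu0) / lambd     # score-function / REINFORCE estimator samples
reparam_samples = (x - z) / sigma**2      # reparameterisation-trick estimator samples

print("score-function : mean {:.3f}, std {:.3f}".format(score_samples.mean(), score_samples.std()))
print("reparameterised: mean {:.3f}, std {:.3f}".format(reparam_samples.mean(), reparam_samples.std()))
```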
AhmetHamzaEmra/Deep-Learning-Specialization-Coursera
Neural Networks and Deep Learning/Python_Basics_With_Numpy_v3.ipynb
mit
### START CODE HERE ### (≈ 1 line of code) test = 'Hello World' ### END CODE HERE ### print ("test: " + test) """ Explanation: Python Basics with Numpy (optional assignment) Welcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you've used Python before, this will help familiarize you with functions we'll need. Instructions: - You will be using Python 3. - Avoid using for-loops and while-loops, unless you are explicitly told to do so. - Do not modify the (# GRADED FUNCTION [function name]) comment in some cells. Your work would not be graded if you change this. Each cell containing that comment should only contain one function. - After coding your function, run the cell right below it to check if your result is correct. After this assignment you will: - Be able to use iPython Notebooks - Be able to use numpy functions and numpy matrix/vector operations - Understand the concept of "broadcasting" - Be able to vectorize code Let's get started! About iPython Notebooks iPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the ### START CODE HERE ### and ### END CODE HERE ### comments. After writing your code, you can run the cell by either pressing "SHIFT"+"ENTER" or by clicking on "Run Cell" (denoted by a play symbol) in the upper bar of the notebook. We will often specify "(≈ X lines of code)" in the comments to tell you about how much code you need to write. It is just a rough estimate, so don't feel bad if your code is longer or shorter. Exercise: Set test to "Hello World" in the cell below to print "Hello World" and run the two cells below. End of explanation """ # GRADED FUNCTION: basic_sigmoid import math def basic_sigmoid(x): """ Compute sigmoid of x. Arguments: x -- A scalar Return: s -- sigmoid(x) """ ### START CODE HERE ### (≈ 1 line of code) s = 1/(1+math.exp(-x)) ### END CODE HERE ### return s basic_sigmoid(3) """ Explanation: Expected output: test: Hello World <font color='blue'> What you need to remember: - Run your cells using SHIFT+ENTER (or "Run cell") - Write code in the designated areas using Python 3 only - Do not modify the code outside of the designated areas 1 - Building basic functions with numpy Numpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments. 1.1 - sigmoid function, np.exp() Before using np.exp(), you will use math.exp() to implement the sigmoid function. You will then see why np.exp() is preferable to math.exp(). Exercise: Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function. Reminder: $sigmoid(x) = \frac{1}{1+e^{-x}}$ is sometimes also known as the logistic function. It is a non-linear function used not only in Machine Learning (Logistic Regression), but also in Deep Learning. <img src="images/Sigmoid.png" style="width:500px;height:228px;"> To refer to a function belonging to a specific package you could call it using package_name.function(). Run the code below to see an example with math.exp(). End of explanation """ ### One reason why we use "numpy" instead of "math" in Deep Learning ### x = [1, 2, 3] basic_sigmoid(x) # you will see this give an error when you run it, because x is a vector. 
""" Explanation: Expected Output: <table style = "width:40%"> <tr> <td>** basic_sigmoid(3) **</td> <td>0.9525741268224334 </td> </tr> </table> Actually, we rarely use the "math" library in deep learning because the inputs of the functions are real numbers. In deep learning we mostly use matrices and vectors. This is why numpy is more useful. End of explanation """ import numpy as np # example of np.exp x = np.array([1, 2, 3]) print(np.exp(x)) # result is (exp(1), exp(2), exp(3)) """ Explanation: In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be: $np.exp(x) = (e^{x_1}, e^{x_2}, ..., e^{x_n})$ End of explanation """ # example of vector operation x = np.array([1, 2, 3]) print (x + 3) """ Explanation: Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the same size as x. End of explanation """ # GRADED FUNCTION: sigmoid import numpy as np # this means you can access numpy functions by writing np.function() instead of numpy.function() def sigmoid(x): """ Compute the sigmoid of x Arguments: x -- A scalar or numpy array of any size Return: s -- sigmoid(x) """ ### START CODE HERE ### (≈ 1 line of code) s = 1/(1+np.exp(-x)) ### END CODE HERE ### return s x = np.array([1, 2, 3]) sigmoid(x) """ Explanation: Any time you need more info on a numpy function, we encourage you to look at the official documentation. You can also create a new cell in the notebook and write np.exp? (for example) to get quick access to the documentation. Exercise: Implement the sigmoid function using numpy. Instructions: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices...) are called numpy arrays. You don't need to know more for now. $$ \text{For } x \in \mathbb{R}^n \text{, } sigmoid(x) = sigmoid\begin{pmatrix} x_1 \ x_2 \ ... \ x_n \ \end{pmatrix} = \begin{pmatrix} \frac{1}{1+e^{-x_1}} \ \frac{1}{1+e^{-x_2}} \ ... \ \frac{1}{1+e^{-x_n}} \ \end{pmatrix}\tag{1} $$ End of explanation """ # GRADED FUNCTION: sigmoid_derivative def sigmoid_derivative(x): """ Compute the gradient (also called the slope or derivative) of the sigmoid function with respect to its input x. You can store the output of the sigmoid function into variables and then use it to calculate the gradient. Arguments: x -- A scalar or numpy array Return: ds -- Your computed gradient. """ ### START CODE HERE ### (≈ 2 lines of code) s = sigmoid(x) ds = s*(1-s) ### END CODE HERE ### return ds x = np.array([1, 2, 3]) print ("sigmoid_derivative(x) = " + str(sigmoid_derivative(x))) """ Explanation: Expected Output: <table> <tr> <td> **sigmoid([1,2,3])**</td> <td> array([ 0.73105858, 0.88079708, 0.95257413]) </td> </tr> </table> 1.2 - Sigmoid gradient As you've seen in lecture, you will need to compute gradients to optimize loss functions using backpropagation. Let's code your first gradient function. Exercise: Implement the function sigmoid_grad() to compute the gradient of the sigmoid function with respect to its input x. The formula is: $$sigmoid_derivative(x) = \sigma'(x) = \sigma(x) (1 - \sigma(x))\tag{2}$$ You often code this function in two steps: 1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful. 2. 
Compute $\sigma'(x) = s(1-s)$ End of explanation """ # GRADED FUNCTION: image2vector def image2vector(image): """ Argument: image -- a numpy array of shape (length, height, depth) Returns: v -- a vector of shape (length*height*depth, 1) """ ### START CODE HERE ### (≈ 1 line of code) v = image.reshape((image.shape[0]*image.shape[1]*image.shape[2],1)) ### END CODE HERE ### return v # This is a 3 by 3 by 2 array, typically images will be (num_px_x, num_px_y,3) where 3 represents the RGB values image = np.array([[[ 0.67826139, 0.29380381], [ 0.90714982, 0.52835647], [ 0.4215251 , 0.45017551]], [[ 0.92814219, 0.96677647], [ 0.85304703, 0.52351845], [ 0.19981397, 0.27417313]], [[ 0.60659855, 0.00533165], [ 0.10820313, 0.49978937], [ 0.34144279, 0.94630077]]]) print ("image2vector(image) = " + str(image2vector(image))) """ Explanation: Expected Output: <table> <tr> <td> **sigmoid_derivative([1,2,3])**</td> <td> [ 0.19661193 0.10499359 0.04517666] </td> </tr> </table> 1.3 - Reshaping arrays Two common numpy functions used in deep learning are np.shape and np.reshape(). - X.shape is used to get the shape (dimension) of a matrix/vector X. - X.reshape(...) is used to reshape X into some other dimension. For example, in computer science, an image is represented by a 3D array of shape $(length, height, depth = 3)$. However, when you read an image as the input of an algorithm you convert it to a vector of shape $(lengthheight3, 1)$. In other words, you "unroll", or reshape, the 3D array into a 1D vector. <img src="images/image2vector_kiank.png" style="width:500px;height:300;"> Exercise: Implement image2vector() that takes an input of shape (length, height, 3) and returns a vector of shape (length*height*3, 1). For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a*b,c) you would do: python v = v.reshape((v.shape[0]*v.shape[1], v.shape[2])) # v.shape[0] = a ; v.shape[1] = b ; v.shape[2] = c - Please don't hardcode the dimensions of image as a constant. Instead look up the quantities you need with image.shape[0], etc. End of explanation """ # GRADED FUNCTION: normalizeRows def normalizeRows(x): """ Implement a function that normalizes each row of the matrix x (to have unit length). Argument: x -- A numpy matrix of shape (n, m) Returns: x -- The normalized (by row) numpy matrix. You are allowed to modify x. """ ### START CODE HERE ### (≈ 2 lines of code) # Compute x_norm as the norm 2 of x. Use np.linalg.norm(..., ord = 2, axis = ..., keepdims = True) x_norm = np.linalg.norm(x ,ord = 2, axis = 1, keepdims = True) # Divide x by its norm. x = x/x_norm ### END CODE HERE ### return x x = np.array([ [0, 3, 4], [1, 6, 4]]) print("normalizeRows(x) = " + str(normalizeRows(x))) """ Explanation: Expected Output: <table style="width:100%"> <tr> <td> **image2vector(image)** </td> <td> [[ 0.67826139] [ 0.29380381] [ 0.90714982] [ 0.52835647] [ 0.4215251 ] [ 0.45017551] [ 0.92814219] [ 0.96677647] [ 0.85304703] [ 0.52351845] [ 0.19981397] [ 0.27417313] [ 0.60659855] [ 0.00533165] [ 0.10820313] [ 0.49978937] [ 0.34144279] [ 0.94630077]]</td> </tr> </table> 1.4 - Normalizing rows Another common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to a better performance because gradient descent converges faster after normalization. Here, by normalization we mean changing x to $ \frac{x}{\| x\|} $ (dividing each row vector of x by its norm). 
For example, if $$x = \begin{bmatrix} 0 & 3 & 4 \ 2 & 6 & 4 \ \end{bmatrix}\tag{3}$$ then $$\| x\| = np.linalg.norm(x, axis = 1, keepdims = True) = \begin{bmatrix} 5 \ \sqrt{56} \ \end{bmatrix}\tag{4} $$and $$ x_normalized = \frac{x}{\| x\|} = \begin{bmatrix} 0 & \frac{3}{5} & \frac{4}{5} \ \frac{2}{\sqrt{56}} & \frac{6}{\sqrt{56}} & \frac{4}{\sqrt{56}} \ \end{bmatrix}\tag{5}$$ Note that you can divide matrices of different sizes and it works fine: this is called broadcasting and you're going to learn about it in part 5. Exercise: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1). End of explanation """ # GRADED FUNCTION: softmax def softmax(x): """Calculates the softmax for each row of the input x. Your code should work for a row vector and also for matrices of shape (n, m). Argument: x -- A numpy matrix of shape (n,m) Returns: s -- A numpy matrix equal to the softmax of x, of shape (n,m) """ ### START CODE HERE ### (≈ 3 lines of code) # Apply exp() element-wise to x. Use np.exp(...). x_exp = np.exp(x) # Create a vector x_sum that sums each row of x_exp. Use np.sum(..., axis = 1, keepdims = True). x_sum = np.sum(x_exp,axis=1,keepdims=True) # Compute softmax(x) by dividing x_exp by x_sum. It should automatically use numpy broadcasting. s = x_exp/x_sum ### END CODE HERE ### return s x = np.array([ [9, 2, 5, 0, 0], [7, 5, 0, 0 ,0]]) print("softmax(x) = " + str(softmax(x))) """ Explanation: Expected Output: <table style="width:60%"> <tr> <td> **normalizeRows(x)** </td> <td> [[ 0. 0.6 0.8 ] [ 0.13736056 0.82416338 0.54944226]]</td> </tr> </table> Note: In normalizeRows(), you can try to print the shapes of x_norm and x, and then rerun the assessment. You'll find out that they have different shapes. This is normal given that x_norm takes the norm of each row of x. So x_norm has the same number of rows but only 1 column. So how did it work when you divided x by x_norm? This is called broadcasting and we'll talk about it now! 1.5 - Broadcasting and the softmax function A very important concept to understand in numpy is "broadcasting". It is very useful for performing mathematical operations between arrays of different shapes. For the full details on broadcasting, you can read the official broadcasting documentation. Exercise: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization. Instructions: - $ \text{for } x \in \mathbb{R}^{1\times n} \text{, } softmax(x) = softmax(\begin{bmatrix} x_1 && x_2 && ... && x_n \end{bmatrix}) = \begin{bmatrix} \frac{e^{x_1}}{\sum_{j}e^{x_j}} && \frac{e^{x_2}}{\sum_{j}e^{x_j}} && ... 
&& \frac{e^{x_n}}{\sum_{j}e^{x_j}} \end{bmatrix} $ $\text{for a matrix } x \in \mathbb{R}^{m \times n} \text{, $x_{ij}$ maps to the element in the $i^{th}$ row and $j^{th}$ column of $x$, thus we have: }$ $$softmax(x) = softmax\begin{bmatrix} x_{11} & x_{12} & x_{13} & \dots & x_{1n} \ x_{21} & x_{22} & x_{23} & \dots & x_{2n} \ \vdots & \vdots & \vdots & \ddots & \vdots \ x_{m1} & x_{m2} & x_{m3} & \dots & x_{mn} \end{bmatrix} = \begin{bmatrix} \frac{e^{x_{11}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{12}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{13}}}{\sum_{j}e^{x_{1j}}} & \dots & \frac{e^{x_{1n}}}{\sum_{j}e^{x_{1j}}} \ \frac{e^{x_{21}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{22}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{23}}}{\sum_{j}e^{x_{2j}}} & \dots & \frac{e^{x_{2n}}}{\sum_{j}e^{x_{2j}}} \ \vdots & \vdots & \vdots & \ddots & \vdots \ \frac{e^{x_{m1}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m2}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m3}}}{\sum_{j}e^{x_{mj}}} & \dots & \frac{e^{x_{mn}}}{\sum_{j}e^{x_{mj}}} \end{bmatrix} = \begin{pmatrix} softmax\text{(first row of x)} \ softmax\text{(second row of x)} \ ... \ softmax\text{(last row of x)} \ \end{pmatrix} $$ End of explanation """ import time x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0] x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0] ### CLASSIC DOT PRODUCT OF VECTORS IMPLEMENTATION ### tic = time.process_time() dot = 0 for i in range(len(x1)): dot+= x1[i]*x2[i] toc = time.process_time() print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms") ### CLASSIC OUTER PRODUCT IMPLEMENTATION ### tic = time.process_time() outer = np.zeros((len(x1),len(x2))) # we create a len(x1)*len(x2) matrix with only zeros for i in range(len(x1)): for j in range(len(x2)): outer[i,j] = x1[i]*x2[j] toc = time.process_time() print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms") ### CLASSIC ELEMENTWISE IMPLEMENTATION ### tic = time.process_time() mul = np.zeros(len(x1)) for i in range(len(x1)): mul[i] = x1[i]*x2[i] toc = time.process_time() print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms") ### CLASSIC GENERAL DOT PRODUCT IMPLEMENTATION ### W = np.random.rand(3,len(x1)) # Random 3*len(x1) numpy array tic = time.process_time() gdot = np.zeros(W.shape[0]) for i in range(W.shape[0]): for j in range(len(x1)): gdot[i] += W[i,j]*x1[j] toc = time.process_time() print ("gdot = " + str(gdot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms") x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0] x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0] ### VECTORIZED DOT PRODUCT OF VECTORS ### tic = time.process_time() dot = np.dot(x1,x2) toc = time.process_time() print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms") ### VECTORIZED OUTER PRODUCT ### tic = time.process_time() outer = np.outer(x1,x2) toc = time.process_time() print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms") ### VECTORIZED ELEMENTWISE MULTIPLICATION ### tic = time.process_time() mul = np.multiply(x1,x2) toc = time.process_time() print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms") ### VECTORIZED GENERAL DOT PRODUCT ### tic = time.process_time() dot = np.dot(W,x1) toc = time.process_time() print ("gdot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms") """ Explanation: Expected Output: <table style="width:60%"> 
<tr> <td> **softmax(x)** </td> <td> [[ 9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04 1.21052389e-04] [ 8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04 8.01252314e-04]]</td> </tr> </table> Note: - If you print the shapes of x_exp, x_sum and s above and rerun the assessment cell, you will see that x_sum is of shape (2,1) while x_exp and s are of shape (2,5). x_exp/x_sum works due to python broadcasting. Congratulations! You now have a pretty good understanding of python numpy and have implemented a few useful functions that you will be using in deep learning. <font color='blue'> What you need to remember: - np.exp(x) works for any np.array x and applies the exponential function to every coordinate - the sigmoid function and its gradient - image2vector is commonly used in deep learning - np.reshape is widely used. In the future, you'll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs. - numpy has efficient built-in functions - broadcasting is extremely useful 2) Vectorization In deep learning, you deal with very large datasets. Hence, a non-computationally-optimal function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally efficient, you will use vectorization. For example, try to tell the difference between the following implementations of the dot/outer/elementwise product. End of explanation """ # GRADED FUNCTION: L1 def L1(yhat, y): """ Arguments: yhat -- vector of size m (predicted labels) y -- vector of size m (true labels) Returns: loss -- the value of the L1 loss function defined above """ ### START CODE HERE ### (≈ 1 line of code) loss = np.sum(np.abs(y-yhat)) ### END CODE HERE ### return loss yhat = np.array([.9, 0.2, 0.1, .4, .9]) y = np.array([1, 0, 0, 1, 1]) print("L1 = " + str(L1(yhat,y))) """ Explanation: As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger. Note that np.dot() performs a matrix-matrix or matrix-vector multiplication. This is different from np.multiply() and the * operator (which is equivalent to .* in Matlab/Octave), which performs an element-wise multiplication. 2.1 Implement the L1 and L2 loss functions Exercise: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful. Reminder: - The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions ($ \hat{y} $) are from the true values ($y$). In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost. - L1 loss is defined as: $$\begin{align} & L_1(\hat{y}, y) = \sum_{i=0}^m|y^{(i)} - \hat{y}^{(i)}| \end{align}\tag{6}$$ End of explanation """ # GRADED FUNCTION: L2 def L2(yhat, y): """ Arguments: yhat -- vector of size m (predicted labels) y -- vector of size m (true labels) Returns: loss -- the value of the L2 loss function defined above """ ### START CODE HERE ### (≈ 1 line of code) loss = np.sum((y-yhat)**2) ### END CODE HERE ### return loss yhat = np.array([.9, 0.2, 0.1, .4, .9]) y = np.array([1, 0, 0, 1, 1]) print("L2 = " + str(L2(yhat,y))) """ Explanation: Expected Output: <table style="width:20%"> <tr> <td> **L1** </td> <td> 1.1 </td> </tr> </table> Exercise: Implement the numpy vectorized version of the L2 loss. 
There are several ways of implementing the L2 loss, but you may find the function np.dot() useful. As a reminder, if $x = [x_1, x_2, ..., x_n]$, then np.dot(x,x) = $\sum_{j=0}^n x_j^{2}$.
L2 loss is defined as $$\begin{align} & L_2(\hat{y},y) = \sum_{i=0}^m(y^{(i)} - \hat{y}^{(i)})^2 \end{align}\tag{7}$$
End of explanation
"""
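As a quick illustration of the np.dot() hint above (an added aside, not part of the graded exercise), the same L2 value can be computed by dotting the error vector with itself, using the yhat and y vectors defined in the previous cell.
```python
# Equivalent vectorized L2 computations
diff = y - yhat
print(np.dot(diff, diff))        # sum of squared errors via a dot product -> 0.43
print(np.sum(np.square(diff)))   # same value with elementwise squaring
```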
ktaneishi/deepchem
examples/notebooks/Conditional_GAN.ipynb
mit
import deepchem as dc import numpy as np import tensorflow as tf n_classes = 4 class_centers = np.random.uniform(-4, 4, (n_classes, 2)) class_transforms = [] for i in range(n_classes): xscale = np.random.uniform(0.5, 2) yscale = np.random.uniform(0.5, 2) angle = np.random.uniform(0, np.pi) m = [[xscale*np.cos(angle), -yscale*np.sin(angle)], [xscale*np.sin(angle), yscale*np.cos(angle)]] class_transforms.append(m) class_transforms = np.array(class_transforms) """ Explanation: Conditional Generative Adversarial Network Note: This example implements a GAN from scratch. The same model could be implemented much more easily with the dc.models.GAN class. See the MNIST GAN notebook for an example of using that class. It can still be useful to know how to implement a GAN from scratch for advanced situations that are beyond the scope of what the standard GAN class supports. A Generative Adversarial Network (GAN) is a type of generative model. It consists of two parts called the "generator" and the "discriminator". The generator takes random values as input and transforms them into an output that (hopefully) resembles the training data. The discriminator takes a set of samples as input and tries to distinguish the real training samples from the ones created by the generator. Both of them are trained together. The discriminator tries to get better and better at telling real from false data, while the generator tries to get better and better at fooling the discriminator. A Conditional GAN (CGAN) allows additional inputs to the generator and discriminator that their output is conditioned on. For example, this might be a class label, and the GAN tries to learn how the data distribution varies between classes. For this example, we will create a data distribution consisting of a set of ellipses in 2D, each with a random position, shape, and orientation. Each class corresponds to a different ellipse. Let's randomly generate the ellipses. End of explanation """ def generate_data(n_points): classes = np.random.randint(n_classes, size=n_points) r = np.random.random(n_points) angle = 2*np.pi*np.random.random(n_points) points = (r*np.array([np.cos(angle), np.sin(angle)])).T points = np.einsum('ijk,ik->ij', class_transforms[classes], points) points += class_centers[classes] return classes, points """ Explanation: This function generates random data from the distribution. For each point it chooses a random class, then a random position in that class' ellipse. End of explanation """ %matplotlib inline import matplotlib.pyplot as plot classes, points = generate_data(1000) plot.scatter(x=points[:,0], y=points[:,1], c=classes) """ Explanation: Let's plot a bunch of random points drawn from this distribution to see what it looks like. Points are colored based on their class label. 
End of explanation """ import deepchem.models.tensorgraph.layers as layers model = dc.models.TensorGraph(learning_rate=1e-4, use_queue=False) # Inputs to the model random_in = layers.Feature(shape=(None, 10)) # Random input to the generator generator_classes = layers.Feature(shape=(None, n_classes)) # The classes of the generated samples real_data_points = layers.Feature(shape=(None, 2)) # The training samples real_data_classes = layers.Feature(shape=(None, n_classes)) # The classes of the training samples is_real = layers.Weights(shape=(None, 1)) # Flags to distinguish real from generated samples # The generator gen_in = layers.Concat([random_in, generator_classes]) gen_dense1 = layers.Dense(30, in_layers=gen_in, activation_fn=tf.nn.relu) gen_dense2 = layers.Dense(30, in_layers=gen_dense1, activation_fn=tf.nn.relu) generator_points = layers.Dense(2, in_layers=gen_dense2) model.add_output(generator_points) # The discriminator all_points = layers.Concat([generator_points, real_data_points], axis=0) all_classes = layers.Concat([generator_classes, real_data_classes], axis=0) discrim_in = layers.Concat([all_points, all_classes]) discrim_dense1 = layers.Dense(30, in_layers=discrim_in, activation_fn=tf.nn.relu) discrim_dense2 = layers.Dense(30, in_layers=discrim_dense1, activation_fn=tf.nn.relu) discrim_prob = layers.Dense(1, in_layers=discrim_dense2, activation_fn=tf.sigmoid) """ Explanation: Now let's create the model for our CGAN. End of explanation """ # Discriminator discrim_real_data_loss = -layers.Log(discrim_prob+1e-10) * is_real discrim_gen_data_loss = -layers.Log(1-discrim_prob+1e-10) * (1-is_real) discrim_loss = layers.ReduceMean(discrim_real_data_loss + discrim_gen_data_loss) discrim_submodel = model.create_submodel(layers=[discrim_dense1, discrim_dense2, discrim_prob], loss=discrim_loss) # Generator gen_loss = -layers.ReduceMean(layers.Log(discrim_prob+1e-10) * (1-is_real)) gen_submodel = model.create_submodel(layers=[gen_dense1, gen_dense2, generator_points], loss=gen_loss) """ Explanation: We'll use different loss functions for training the generator and discriminator. The discriminator outputs its predictions in the form of a probability that each sample is a real sample (that is, that it came from the training set rather than the generator). Its loss consists of two terms. The first term tries to maximize the output probability for real data, and the second term tries to minimize the output probability for generated samples. The loss function for the generator is just a single term: it tries to maximize the discriminator's output probability for generated samples. For each one, we create a "submodel" specifying a set of layers that will be optimized based on a loss function. End of explanation """ batch_size = model.batch_size discrim_error = [] gen_error = [] for step in range(20000): classes, points = generate_data(batch_size) class_flags = dc.metrics.to_one_hot(classes, n_classes) feed_dict={random_in: np.random.random((batch_size, 10)), generator_classes: class_flags, real_data_points: points, real_data_classes: class_flags, is_real: np.concatenate([np.zeros((batch_size,1)), np.ones((batch_size,1))])} discrim_error.append(model.fit_generator([feed_dict], submodel=discrim_submodel, checkpoint_interval=0)) if step%2 == 0: gen_error.append(model.fit_generator([feed_dict], submodel=gen_submodel, checkpoint_interval=0)) if step%1000 == 999: print(step, np.mean(discrim_error), np.mean(gen_error)) discrim_error = [] gen_error = [] """ Explanation: Now to fit the model. 
Here are some important points to notice about the code.

We use fit_generator() to train only a single batch at a time, and we alternate between the discriminator and the generator. That way, both parts of the model improve together.
We only train the generator half as often as the discriminator. On this particular model, that gives much better results. You will often need to adjust the ratio of discriminator steps to generator steps to get good results on a given problem.
We disable checkpointing by specifying checkpoint_interval=0. Since each call to fit_generator() includes only a single batch, it would otherwise save a checkpoint to disk after every batch, which would be very slow. If this were a real project and not just an example, we would want to occasionally call model.save_checkpoint() to write checkpoints at a reasonable interval.

End of explanation
"""
classes, points = generate_data(1000)
feed_dict = {random_in: np.random.random((1000, 10)),
            generator_classes: dc.metrics.to_one_hot(classes, n_classes)}
gen_points = model.predict_on_generator([feed_dict])
plot.scatter(x=gen_points[:,0], y=gen_points[:,1], c=classes)
"""
Explanation: Have the trained model generate some data, and see how well it matches the training distribution we plotted before.
End of explanation
"""
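As an optional numerical check (a sketch only, assuming the trained model and the helpers from the cells above are still in scope; real_classes, real_points and fake_points are new helper names introduced here), we can compare the per-class means of real samples with the per-class means of generated samples. If the CGAN has learned the conditional distribution, the two sets of means should be close.
real_classes, real_points = generate_data(4000)
feed_dict = {random_in: np.random.random((4000, 10)),
            generator_classes: dc.metrics.to_one_hot(real_classes, n_classes)}
fake_points = model.predict_on_generator([feed_dict])
for i in range(n_classes):
    # mean of the real points and of the generated points for class i
    real_mean = real_points[real_classes == i].mean(axis=0)
    fake_mean = fake_points[real_classes == i].mean(axis=0)
    print('class %d: real mean %s, generated mean %s' % (i, real_mean, fake_mean))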
probml/pyprobml
deprecated/bernoulli_hmm_example.ipynb
mit
!pip install git+git://github.com/lindermanlab/ssm-jax-refactor.git import ssm """ Explanation: <a href="https://colab.research.google.com/github/probml/probml-notebooks/blob/main/notebooks/bernoulli_hmm_example.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Bernoulli HMM Example Notebook Modified from https://github.com/lindermanlab/ssm-jax-refactor/blob/main/notebooks/bernoulli-hmm-example.ipynb End of explanation """ import jax.random as jr import jax.numpy as np import matplotlib.pyplot as plt from tensorflow_probability.substrates import jax as tfp from ssm.hmm import BernoulliHMM from ssm.plots import gradient_cmap from ssm.utils import find_permutation import warnings import seaborn as sns sns.set_style("white") sns.set_context("talk") color_names = ["windows blue", "red", "amber", "faded green", "dusty purple", "orange"] colors = sns.xkcd_palette(color_names) cmap = gradient_cmap(colors) def plot_transition_matrix(transition_matrix): plt.imshow(transition_matrix, vmin=0, vmax=1, cmap="Greys") plt.xlabel("next state") plt.ylabel("current state") plt.colorbar() plt.show() def compare_transition_matrix(true_matrix, test_matrix): fig, axs = plt.subplots(1, 2, figsize=(10, 5)) out = axs[0].imshow(true_matrix, vmin=0, vmax=1, cmap="Greys") axs[1].imshow(test_matrix, vmin=0, vmax=1, cmap="Greys") axs[0].set_title("True Transition Matrix") axs[1].set_title("Test Transition Matrix") cax = fig.add_axes( [ axs[1].get_position().x1 + 0.07, axs[1].get_position().y0, 0.02, axs[1].get_position().y1 - axs[1].get_position().y0, ] ) plt.colorbar(out, cax=cax) plt.show() def plot_hmm_data(obs, states): lim = 1.01 * abs(obs).max() time_bins, obs_dim = obs.shape plt.figure(figsize=(8, 3)) plt.imshow( states[None, :], aspect="auto", cmap=cmap, vmin=0, vmax=len(colors) - 1, extent=(0, time_bins, -lim, (obs_dim) * lim), ) for d in range(obs_dim): plt.plot(obs[:, d] + lim * d, "-k") plt.xlim(0, time_bins) plt.xlabel("time") plt.yticks(lim * np.arange(obs_dim), ["$x_{}$".format(d + 1) for d in range(obs_dim)]) plt.title("Simulated data from an HMM") plt.tight_layout() def plot_posterior_states(Ez, states, perm): plt.figure(figsize=(25, 5)) plt.imshow(Ez.T[perm], aspect="auto", interpolation="none", cmap="Greys") plt.plot(states, label="True State") plt.plot(Ez.T[perm].argmax(axis=0), "--", label="Predicted State") plt.xlabel("time") plt.ylabel("latent state") # plt.legend(bbox_to_anchor=(1,1)) plt.title("Predicted vs. 
Ground Truth Latent State") # plt.show() """ Explanation: Imports and Plotting Functions End of explanation """ num_states = 5 num_channels = 10 transition_matrix = 0.90 * np.eye(num_states) + 0.10 * np.ones((num_states, num_states)) / num_states true_hmm = BernoulliHMM( num_states, num_emission_dims=num_channels, transition_matrix=transition_matrix, seed=jr.PRNGKey(0) ) plot_transition_matrix(true_hmm.transition_matrix) """ Explanation: Bernoulli HMM Let's create a true model End of explanation """ rng = jr.PRNGKey(0) num_timesteps = 500 states, data = true_hmm.sample(rng, num_timesteps) """ Explanation: From the true model, we can sample synthetic data End of explanation """ fig, axs = plt.subplots(2, 1, sharex=True, figsize=(20, 8)) axs[0].imshow(data.T, aspect="auto", interpolation="none") # axs[0].set_ylabel("neuron") axs[0].set_title("Observations") axs[1].plot(states) axs[1].set_title("Latent State") axs[1].set_xlabel("time") axs[1].set_ylabel("state") plt.savefig("bernoulli-hmm-data.pdf") plt.savefig("bernoulli-hmm-data.png") plt.show() """ Explanation: Let's view the synthetic data End of explanation """ test_hmm = BernoulliHMM(num_states, num_channels, seed=jr.PRNGKey(32)) lps, test_hmm, posterior = test_hmm.fit(data, method="em", tol=-1) # Plot the log probabilities plt.plot(lps) plt.xlabel("iteration") plt.ylabel("log likelihood") test_hmm.transition_matrix # Compare the transition matrices compare_transition_matrix(true_hmm.transition_matrix, test_hmm.transition_matrix) plt.savefig("bernoulli-hmm-transmat-comparison.pdf") # Posterior distribution Ez = posterior.expected_states.reshape(-1, num_states) perm = find_permutation(states, np.argmax(Ez, axis=-1)) plot_posterior_states(Ez, states, perm) plt.savefig("bernoulli-hmm-state-est-comparison.pdf") plt.savefig("bernoulli-hmm-state-est-comparison.png") plt.show() """ Explanation: Fit HMM using exact EM update End of explanation """ rng = jr.PRNGKey(0) num_timesteps = 500 num_trials = 5 all_states, all_data = true_hmm.sample(rng, num_timesteps, num_samples=num_trials) # Now we have a batch dimension of size `num_trials` print(all_states.shape) print(all_data.shape) lps, test_hmm, posterior = test_hmm.fit(all_data, method="em", tol=-1) # plot marginal log probabilities plt.title("Marginal Log Probability") plt.ylabel("lp") plt.xlabel("idx") plt.plot(lps / data.size) compare_transition_matrix(true_hmm.transition_matrix, test_hmm.transition_matrix) # For the first few trials, let's see how good our predicted states are for trial_idx in range(3): print("=" * 5, f"Trial: {trial_idx}", "=" * 5) Ez = posterior.expected_states[trial_idx] states = all_states[trial_idx] perm = find_permutation(states, np.argmax(Ez, axis=-1)) plot_posterior_states(Ez, states, perm) """ Explanation: Fit Bernoulli Over Multiple Trials End of explanation """
mspieg/principals-appmath
PolynomialFun.ipynb
cc0-1.0
%matplotlib inline import numpy as np import scipy.linalg as la import matplotlib.pyplot as plt """ Explanation: <table> <tr align=left><td><img align=left src="./images/CC-BY.png"> <td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Marc Spiegelman</td> </table> End of explanation """ # lambda functions for each of the basis functions p0 = lambda x: np.ones(x.shape) p1 = lambda x: x p2 = lambda x: x**2 # lambda function for the matrix whose columns are p_i(x) A = lambda x: np.array([ p0(x), p1(x), p2(x)]).transpose() # lambda function for any vector in P_2, v = c[0]*p0 + c[1]*p1 + c[2]*p2 v = lambda c,x : np.dot(A(x),c) x = np.array([-1.,0.,1.]) print p0(x),p1(x),p2(x) print A(x) c = np.array([1,2,-1]) print v(c,x) """ Explanation: Fun with polynomials GOAL: Explore the ideas of Interpolation, Least Squares fitting and projection of continous functions onto the function space $P_2[-1,1]$ The Space $P_2[-1,1]$ Consider the space of all second order polynomials on the closed interval $x\in[-1,1]$ which is a subspace of continuous functions $C^0[-1,1]$. To completely describe a vector space we need a basis: a set of linear independent vectors that span the space. While there are many possible bases for $P_2[-1,1]$, here we will consider the simplest monomial basis $p_0(x)=1$, $p_1(x)=x$, $p_2(x)=x^2$ or $$ P_2(x)[-1,1] = \mathrm{span}<1,x,x^2> $$ i.e. every vector in $P_2$ can be written as a linear combination of the basis vectors as $$ f(x) = c_0p_0 + c_1p_1 + c_2p_2 = c_0 + c_1 x + c_2x^2 $$ The space P_2(x)[-1,1] is said to be isomorphic to $R^3$ as every vector in $P_2$ can be associated with a unique vector in $R^3$ $$ \mathbf{c}= [ c_0, c_1, c_2]^T $$ here we will set up a bit of python to evaluate polynomials End of explanation """ x = np.linspace(-1,1) plt.figure() plt.plot(x,p0(x),label='$p_0$') plt.hold(True) plt.plot(x,p1(x),label='$p_1$') plt.plot(x,p2(x),label='$p_2$') plt.xlabel('x') plt.ylim(-1.5,1.5) plt.legend(loc='best') plt.grid() plt.show() """ Explanation: and plot them End of explanation """ x = np.array([-1.,0.,1.]) f = np.array([0.,2.,-1.]) c = la.solve(A(x),f) # and plot it out xx = np.linspace(-1,1) # use well sampled space for plotting the quadratic plt.figure() # plot the parabola plt.plot(xx,v(c,xx),'r-') # plot the interpolating points plt.plot(x,f,'bo') plt.xlabel('x') plt.ylabel('$f(x)$') plt.ylim(-1.5,2.5) plt.title('$c={}$: $v ={}p_0 + {}p_1 + {}p_2$'.format(c,c[0],c[1],c[2])) plt.grid() plt.show() """ Explanation: Now let's find the interpolating polynomial that goes through exactly three points. 
$f(-1)=0$, $f(0)=2$, $f(1)=-1$ by solving the invertible system of linear equations $$ [\, p_0(x)\quad p_1(x)\quad p_2(x)\, ] \mathbf{c} = f(x) $$ for the three points in $x=[-1,0,1]^T$ End of explanation """ # choose 7 evenly spaced points in [-1,1] x = np.linspace(-1,1,7) # perturb the parabola with uniform random noise f = v(c,x) + np.random.uniform(-.5,.5,len(x)) # and plot with respect to the underlying parabola plt.figure() plt.plot(x,f,'bo') plt.hold(True) plt.plot(xx,v(c,xx),'r',label='v') plt.xlabel('x') plt.ylim(-1.5,2.5) plt.grid() # now calculate and plot the leastsquares solution to Ac = f c_ls,res,rank,s = la.lstsq(A(x),f) plt.plot(xx,v(c_ls,xx),'g',label='v_lstsq') plt.title('$c={}$: $v={}p_0 + {}p_1 + {}p_2$'.format(c_ls,c_ls[0],c_ls[1],c_ls[2])) plt.legend(loc='best') plt.show() # and show that this is the same solution we would get if we tried to solve the normal equations direction AtA = np.dot(A(x).transpose(),A(x)) Atf = np.dot(A(x).transpose(),f) c_norm = la.solve(AtA,Atf) print 'numpy least-squares c = {}'.format(c_ls) print 'normal equations = {}'.format(c_norm) print 'difference = {}'.format(c_ls-c_norm) print print 'ATA ={}'.format(AtA) """ Explanation: Least Squares problems: Given the value of a function at any three distinct points is sufficient to describe uniquely the interpolating quadratic through those points. But suppose we were given more than 3 points, say 7, in which case the matrix $A$ would be $7\times3$ with rank $r=3$ and unless those 7 points were on the same parabola, there would be no solution to the overdetermined problem. Here we will create that problem by adding more points to the interpolating parabola calculated above and then perturb it with uniform random noise. End of explanation """ # calculate the error vector e = f - v(c_ls,x) print 'error vector\n e={}\n'.format(e) # and calculate the matrix vector product A^T e print 'A^T e = {}'.format(np.dot(A(x).transpose(),e)) """ Explanation: Errors Now let's show that the error $e= f(x) - A(x)c$ is orthogonal to the column space of $A$ i.e. $A^T e = 0$ End of explanation """ # set the function to be projected f = lambda x : np.cos(2*x) + np.sin(1.5*x) # calculate the interpolation of f onto P2, when sampled at points -1,0,1 x = np.array([-1., 0., 1.]) c_interp = la.solve(A(x),f(x)) """ Explanation: Projection of a function onto $P_2[-1,1]$ Now let's extend this problem to finding the best fit projection of a continuous function $f(x)$ onto $P_2$. While we could extend the previous approach by sampling $f(x)$ at a large number of points and calculating the least-squares solution, we can also solve the continuous problem by changing the definition of the inner product from the dot product in $R^n$ to the inner product for continuous functions $$ <f,g> = \int_{-1}^{1} fg dx $$ However the overall approach remains the same as the discrete least squares problem. If we now consider a function $u \in P_2[-1,1]$ such that $$ u(x) = \sum_i c_i p_i(x) $$ then the continous error (or residual) is given by $$ e(x) = u(x) - f(x) $$ for the continuous variable $x\in[-1,1]$. The least square problem now becomes "find $\mathbf{c}\in R^3$ that minimizes $||e||_{L2}$", i.e. the "length" of $e$ in the $L^2$ norm. Alternatively this requires that the error $e(x)$ is orthogonal to all the basis vectors in $P_2$, i.e. 
$$ <p_i,e> = 0 \quad \mathrm{for\, }i=0,1,2 $$ or $$ \int_{-1}^{1} p_i e dx = \int_{-1}^{1} p_i ( u - f) dx = 0 $$ or solve $$ \int_{-1}^{1} p_i \left(\sum_j c_j p_j(x)\right)dx = \int_{-1}^{1} p_i f dx $$ for all $i,j=0,1,2$. Rearranging the summation and the integral sign, we can rewrite the problem as $$ \sum_j M_{ij} c_j = \hat{f}_i $$ where $$ M_{ij} = <p_i,p_j>=\int_{-1}^{1} p_i p_j dx\quad \mathrm{and}\quad \hat{f}i = <p_i,f> = \int{-1}^{1} p_i f dx $$ or in matrix vector notation $M\mathbf{c} = \hat{\mathbf{f}}$ where $M$ is the "mass-matrix (and corresponds to the symmetric matrix $A^TA$) and $\hat{\mathbf{f}}$ is the "load vector" which corresponds to $A^t\mathbf{b}$. For the simple monomial basis, we can calculate the terms of $M$ easily, but here we will just use scipy's numerical quadrature routines We'll start by defining our function and calculating its interpolation onto $P_2[-1,1]$ as the unique quadratic that interpolates $f(x)$ at $x=[-1,0,1]$ End of explanation """ from scipy.integrate import quad def mij(i,j,x): """ integrand for component Mij of the mass matrix""" p = np.array([1., x, x**2]) return p[i]*p[j] def fi(i,x,f): """ integrand for component i of the load vector""" p = np.array([1., x, x**2]) return p[i]*f(x) # construct the symmetric mass matrix M_ij = <p_i,p_j> M = np.zeros((3,3)) fhat = np.zeros(3) R = np.zeros((3,3)) # quadrature residuals # loop over the upper triangular elements of M (and fill in the symmetric parts) for i in range(0,3): fhat[i] = quad(lambda x: fi(i,x,f),-1.,1.)[0] for j in range(i,3): result = quad(lambda x: mij(i,j,x),-1.,1.) M[i,j] = result[0] M[j,i] = M[i,j] R[i,j] = result[1] R[j,i] = R[i,j] print 'M = {}\n'.format(M) print 'fhat = {}\n'.format(fhat) # and solve for c c_galerkin = la.solve(M,fhat) print 'c_galerkin ={}'.format(c_galerkin) """ Explanation: Now calculate the mass matrix and load vector and solve for the galerkin projection of $f$ onto $P_2[-1,1]$ End of explanation """ # now plot them all out and compare plt.figure() plt.plot(xx,f(xx),'r',label='$f(x)$') plt.hold(True) plt.plot(x,f(x),'ro') plt.plot(xx,v(c_interp,xx),'g',label='$f_{interp}(x)$') plt.plot(xx,v(c_galerkin,xx),'b',label='$u(x)$') plt.xlabel('x') plt.grid() plt.legend(loc='best') plt.show() """ Explanation: And let's just plot out the three function $f(x)$, $f_{interp}(x)$ it's interpolant, and $u(x)$ it's projection onto $P_2[-1,1]$ End of explanation """
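As a final quantitative check (a sketch only, assuming f, c_interp, c_galerkin, quad and np from the cells above are still in scope; u_interp and u_proj are new helper lambdas), we can compare the $L^2$ norms of the two errors. Because the Galerkin projection minimizes $||e||_{L^2}$ over $P_2[-1,1]$, its error norm should never exceed that of the interpolant.
# evaluate the interpolant and the projection directly from their coefficients
u_interp = lambda x: c_interp[0] + c_interp[1]*x + c_interp[2]*x**2
u_proj = lambda x: c_galerkin[0] + c_galerkin[1]*x + c_galerkin[2]*x**2
err_interp = np.sqrt(quad(lambda x: (f(x) - u_interp(x))**2, -1., 1.)[0])
err_proj = np.sqrt(quad(lambda x: (f(x) - u_proj(x))**2, -1., 1.)[0])
print 'L2 norm of interpolation error = {}'.format(err_interp)
print 'L2 norm of projection error = {}'.format(err_proj)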
seg/2016-ml-contest
MandMs/03_Facies_classification_MandMs_feature_engineering_derivatives_moments_glcms_nofacies_data.ipynb
apache-2.0
# import data and filling missing PE values with average filename = 'nofacies_data.csv' training_data = pd.read_csv(filename) training_data['PE'].fillna((training_data['PE'].mean()), inplace=True) print np.shape(training_data) training_data['PE'].fillna((training_data['PE'].mean()), inplace=True) print np.shape(training_data) pd.set_option('display.float_format', lambda x: '%.4f' % x) training_data.describe() """ Explanation: Feature engineering <a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" property="dct:title">The code and ideas to engineer new features used in this notebook, </span> by <span xmlns:cc="http://creativecommons.org/ns#" property="cc:attributionName">Matteo Niccoli and Mark Dahl, with contributions by Daniel Kittridge,</span> are licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. 1 - Clean up and rescale data End of explanation """ # standardize features to go into moments calculation feature_vectors = training_data.drop(['Formation', 'Well Name', 'Depth'], axis=1) scaler = preprocessing.StandardScaler().fit(feature_vectors) scaled_features = scaler.transform(feature_vectors) scaled_vectors_df = pd.DataFrame(scaled_features, columns=list(feature_vectors)) scaled_feat_df = pd.concat((training_data[['Depth', 'Well Name', 'Formation']], scaled_vectors_df),1) scaled_feat_df.head() scaled_feat_df.shape """ Explanation: To keep feature importance on a level playing field, we will rescale each WL log before calculating moments. We will use sklearn.preprocessing.StandardScaler. End of explanation """ # calculate all 1st and 2nd derivative for all logs, for all wells derivative_df = pd.DataFrame() # final dataframe grouped = training_data['Well Name'].unique() for well in grouped: # for each well new_df = pd.DataFrame() # make a new temporary dataframe for log in ['GR', 'ILD_log10', 'DeltaPHI', 'PHIND' ,'PE']: d1 = np.array(np.gradient(scaled_feat_df[log][scaled_feat_df['Well Name'] == well])) d2 = np.array(np.gradient(np.gradient(scaled_feat_df[log][scaled_feat_df['Well Name'] == well]))) # write to temporary dataframe new_df[str(log) + '_d1'] = d1 new_df[str(log) + '_d2'] = d2 # append all rows of temporary dataframe to final dataframe derivative_df = pd.concat([derivative_df, new_df]) derivative_df.describe() """ Explanation: 2 - Calculate derivatives The rate of change of a function of series of values is commonly used as a booster for machine learning classifiers. We will calculate the first and second derivatives for each WL log curve in each well. 
End of explanation
"""
# calculate all 1st and 2nd derivatives for all logs, for all wells
derivative_df = pd.DataFrame() # final dataframe
grouped = training_data['Well Name'].unique()

for well in grouped: # for each well
    new_df = pd.DataFrame() # make a new temporary dataframe
    for log in ['GR', 'ILD_log10', 'DeltaPHI', 'PHIND' ,'PE']:
        d1 = np.array(np.gradient(scaled_feat_df[log][scaled_feat_df['Well Name'] == well]))
        d2 = np.array(np.gradient(np.gradient(scaled_feat_df[log][scaled_feat_df['Well Name'] == well])))
        # write to temporary dataframe
        new_df[str(log) + '_d1'] = d1
        new_df[str(log) + '_d2'] = d2
    # append all rows of temporary dataframe to final dataframe
    derivative_df = pd.concat([derivative_df, new_df])

derivative_df.describe()
"""
Explanation: 3 - Create a list of geometrically-expanding windows for rolling features
Facies are interpreted groupings of rocks, commonly composed of several rock elements, each demonstrating different properties. Therefore, we should expect to see a distribution of WL log responses for each facies. A corollary of this is that attempting to directly solve for a facies from the WL log responses at any single depth will be tenuous. Facies require a context; a context provided by the surrounding rock. Likewise, if we are to effectively solve for facies from WL logs, we should provide a context for each response at a given depth. We can accomplish this with rolling windows. A rolling window provides a local neighbourhood of values about a central point, which can be stepped through an array of values. The neighbourhood sample size (the depth interval evaluated divided by the sampling rate) should relate directly to the thickness of a facies. Because facies are observed with different thicknesses, we will build neighbourhoods large enough to include the thickest observed facies. To keep the number of rolling windows reasonable, we will use a geometric function where the half window length is doubled for each subsequent value. 
End of explanation """ # Efficient rolling statistics with NumPy # http://www.rigtorp.se/2011/01/01/rolling-statistics-numpy.html def rolling_window(a, window): shape = a.shape[:-1] + (a.shape[-1] - window + 1, window) strides = a.strides + (a.strides[-1],) return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides) # function to calculate moments using a rolling window def rollin_moments(arr, w, moment ='mean'): """- pad input array by (w-1)/2 samples at the top and bottom - apply rolling window function - calculate moment: mean (default), var, or skew""" mom = [] arr = np.pad(arr, ((w-1)/2, (w-1)/2), 'edge') if moment == 'std': return np.array(np.std(rolling_window(arr, w), 1)) elif moment == 'skew': return np.array(sp.stats.skew(rolling_window(arr, w), 1)) else: return np.array(np.mean(rolling_window(arr, w), 1)) moments = ['mean', 'std', 'skew'] # calculate all moments for all logs, for all wells moments_df = pd.DataFrame() # final dataframe grouped = training_data['Well Name'].unique() for well in grouped: # for each well new_df = pd.DataFrame() # make a new temporary dataframe for log in ['GR', 'ILD_log10', 'DeltaPHI', 'PHIND' ,'PE']: for mo in moments: # for each moment # calculate the rolling moments with each window size # and also the mean of moments (all window sizes) results = np.array([rollin_moments(scaled_feat_df[log][scaled_feat_df['Well Name'] == well], size, moment = mo) for size in sizes]) mean_result = np.mean(results, axis=0) # write to temporary dataframe new_df[str(log)+ '_' + str(mo)+'_wsize=' +str(sizes[0])] = results[0] new_df[str(log)+ '_' + str(mo)+'_wsize=' +str(sizes[1])] = results[1] new_df[str(log)+ '_' + str(mo)+'_wsize=' +str(sizes[2])] = results[2] new_df[str(log)+ '_' + str(mo)+'_wsize=' +str(sizes[3])] = results[3] new_df[str(log)+ '_' + str(mo)+'_wsize=' +str(sizes[4])] = results[4] new_df[str(log)+ '_' + str(mo)+'_wsize=' +str(sizes[5])] = results[5] new_df[str(log)+ '_' + str(mo)+'_wsize=' +str(sizes[6])] = results[6] new_df[str(log)+ '_' + str(mo)+'_wsize=ave'] = mean_result # append all rows of temporary dataframe to final dataframe moments_df = pd.concat([moments_df, new_df]) moments_df.describe() """ Explanation: 4 - Moments feature generation The simplest and most fundamental way to numerically describe the shape of a distribution of values is using moments. The first moment, mean $\mu$, characterizes the central tendency of the distribution. The second moment, variance $\sigma^2$, characterizes the spread of the values about the central tendency. The third moment, skewness $\gamma_1$, characterizes the symmetry (or lack thereof) about the central tendency. We will calculate the first three moments (with one small modification) for each rolling window size at every depth. The small modification is that instead of variance $\sigma^2$, we are calculating standard deviation $\sigma$ because the results of variance $\sigma^2$ produce values with units of the mean squared $\mu^2$. As a result, feature importance of variance is artificially high due to the dimension of the variance values. Standard deviation $\sigma$ has the same dimension as mean $\mu$. With respect to facies prediction, now, in addition to the raw WL log inputs, we will describe at multiple scales the shapes of the distributions of WL log responses associated with each facies. 
End of explanation """ # function to calculate glcm and greycoprops using a rolling window def gprops_calc(arr, w, lv, sym = True, prop='dissimilarity'): """- make w copies of the input array, roll it up one row at a time - calculate glcm on a square window of size w - calculate greycoprops from glcm: dissimilarity (default), energy, or correlation - repeat until back at row one N.B. the input array is padded by (w-1)/2 samples at the top and bottom""" diss = [] itr = len(arr) arr = np.pad(arr, ((w-1)/2, (w-1)/2), 'edge') s = np.array([arr,]*w,dtype=np.uint8).transpose() for _ in np.arange(itr): if sym == True: glcm = greycomatrix(s[:w,:], [1], [np.pi/2], levels = lv, symmetric = True, normed = True) else: glcm = greycomatrix(s[:w,:], [1], [np.pi/2], levels = lv, symmetric = False, normed = True) if prop == 'correlation': ds = greycoprops(glcm, 'correlation') elif prop == 'energy': ds = greycoprops(glcm, 'energy') else: ds = greycoprops(glcm, 'dissimilarity') diss.append(ds) s = np.roll(s[:, :], -w) return np.ndarray.flatten(np.array(diss)) methods = ['dissimilarity','energy', 'correlation'] """ Explanation: 5 - GLCM feature generation Statistical moments can be said to characterize the composition of a neighbourhood of values. However, we can easily describe two neighbourhoods with identical composition that are distinctly different. For example N1 = [00001111] and N2 = [01010101] have exactly the same mean $\mu$, variance $\sigma^2$, and skewness $\gamma_1$, but, in terms of rocks, might represent different facies. Therefore, in addition to describing the shape of a distribution of values for a facies, we need something to evaluate the ordering of those values. That something is a grey-level coocurrence matrix (GLCM). A GLCM is a second order statistical method that numerically describes ordering of elements by evaluating the probability of values to be neighbours. Think of the GLCM as a histogram that preserves the ordering of values. For more about the GLCM, see Mryka Hall-Beyer's tutorial and read skimage.feature.greycomatrix documentation. Just as we calculated moments to describe the shape of a histogram, we need to represent the arrangement of values in a GLCM with a single value. Properties that capture different characteristics of a GLCM including contrast, dissimilarity, homogeneity, ASM, energy, and correlation can be calculated with skimage.feature.greycoprops. To keep resulting dimensions equivalent to the moments previously calculated, we will use the properties dissimilarity, energy, and correlation. End of explanation """ # functions to equalize histogram of features to go into GLCM calculation def eqlz(arr, bins): return (bins-1) * exposure.equalize_hist(arr) def eqlz_along_axis(arr, bins): return np.apply_along_axis(eqlz, 0, arr, bins) # equalize features feature_vectors_glcm = training_data.drop(['Formation', 'Well Name', 'Depth'], axis=1) eq_vectors_glcm = eqlz_along_axis(feature_vectors_glcm, 64) eq_vectors_glcm_df = pd.DataFrame(eq_vectors_glcm, columns=list(feature_vectors_glcm)) eq_vectors_glcm_df = np.round(eq_vectors_glcm_df).astype(int) eq_glcm_df = pd.concat((training_data[['Depth', 'Well Name', 'Formation']], eq_vectors_glcm_df),1) eq_glcm_df.head() """ Explanation: Similar to the step preceeding moments calculation, we will rescale the raw WL logs for GLCM property calculation so each resulting property is unaffected by the magnitude of the raw WL log values. 
skimage.feature.greycomatrix requires uint8 values, so we need an alternative to sklearn.preprocessing.StandardScaler. Unlike with calculating moments, preserving the shape of the histogram is not important to the integrity of a GLCM property. We will use histogram equalization, which flattens a histogram (puts an equal number of values in each bin). To maximize the effectiveness of a GLCM, it is commonly wise to reduce the bit depth from 8 to avoid processing expense and the noise caused by empty matrix entries. After some trial and error, we found that 64 bins works nicely. Note that 64 bins results in a 64x64 matrix at every depth for every rolling window size.
End of explanation
"""
# functions to equalize histogram of features to go into GLCM calculation
def eqlz(arr, bins):
    return (bins-1) * exposure.equalize_hist(arr)

def eqlz_along_axis(arr, bins):
    return np.apply_along_axis(eqlz, 0, arr, bins)

# equalize features
feature_vectors_glcm = training_data.drop(['Formation', 'Well Name', 'Depth'], axis=1)
eq_vectors_glcm = eqlz_along_axis(feature_vectors_glcm, 64)
eq_vectors_glcm_df = pd.DataFrame(eq_vectors_glcm, columns=list(feature_vectors_glcm))
eq_vectors_glcm_df = np.round(eq_vectors_glcm_df).astype(int)
eq_glcm_df = pd.concat((training_data[['Depth', 'Well Name', 'Formation']], eq_vectors_glcm_df),1)
eq_glcm_df.head()
"""
Explanation: One last consideration for the GLCM is its symmetry. Symmetry in a GLCM refers to a bi-directional evaluation of the reference-neighbour pair. In plain English, if you were to construct a GLCM by hand, you would move through an array in one direction and then in the opposite direction. It is often desirable to do this because it removes the asymmetry caused at the edge of a neighbourhood. See Mryka Hall-Beyer's tutorial for a full explanation of this. However, since sedimentary rocks (provided that they are structurally undisturbed) are laid down from bottom to top, we thought that, in addition to the symmetric GLCM, it would be useful to evaluate the asymmetric GLCM where we look at the neighbour above. 
First let's calculate symmetric GLCM properties: End of explanation """ glcm_asym_df = pd.DataFrame() # final dataframe grouped1 = training_data['Well Name'].unique() for well1 in grouped1: # for each well new_dfg1 = pd.DataFrame() # make a new temporary dataframe for log1 in ['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE']: # for each log for me in methods: # for each property # calculate rolling GLCM properties with each window size # and also the mean of moments (all window sizes) lg1 = eq_glcm_df[log][eq_glcm_df['Well Name'] == well1] results1 = np.array([gprops_calc(lg1.astype(int), wd, lv = 64, sym = False, prop = me) for wd in sizes]) mean_result1 = np.mean(results1, axis=0) # write to temporary dataframe new_dfg1[str(log1)+ '_GLCM_' + str(me)+'_asym_wsize=' +str(sizes[0])] = results1[0] new_dfg1[str(log1)+ '_GLCM_' + str(me)+'_asym_wsize=' +str(sizes[1])] = results1[1] new_dfg1[str(log1)+ '_GLCM_' + str(me)+'_asym_wsize=' +str(sizes[2])] = results1[2] new_dfg1[str(log1)+ '_GLCM_' + str(me)+'_asym_wsize=' +str(sizes[3])] = results1[3] new_dfg1[str(log1)+ '_GLCM_' + str(me)+'_asym_wsize=' +str(sizes[4])] = results1[4] new_dfg1[str(log1)+ '_GLCM_' + str(me)+'_asym_wsize=' +str(sizes[5])] = results1[5] new_dfg1[str(log1)+ '_GLCM_' + str(me)+'_asym_wsize=' +str(sizes[6])] = results1[6] new_dfg1[str(log1)+ '_GLCM_' + str(me)+'_asym_wsize=ave'] = mean_result1 # append all rows of temporary dataframe to final dataframe glcm_asym_df = pd.concat([glcm_asym_df , new_dfg1]) glcm_asym_df.describe() """ Explanation: And now let's calculate asymmetric GLCM properties using only the upward neighbour: End of explanation """ arr_final = (np.concatenate((training_data.values, derivative_df.values, moments_df, glcm_sym_df, glcm_asym_df), axis=1)) print np.shape(arr_final) cols1 = list(training_data) + list(derivative_df) + list(moments_df) + list(glcm_sym_df) + list(glcm_asym_df) arr_final_df = pd.DataFrame(arr_final, columns=cols1) arr_final_df.describe() #arr_final_df.dtypes lll2 = list(training_data)[3:] + list(derivative_df) + list(moments_df) + list(glcm_sym_df) + list(glcm_asym_df) for l2 in lll2: arr_final_df[l2] = arr_final_df[l2].astype('float64') arr_final_df['Formation'] = arr_final_df['Formation'].astype('category') arr_final_df['Well Name'] = arr_final_df['Well Name'].astype('category') arr_final_df['NM_M'] = arr_final_df['NM_M'].astype('int64') arr_final_df.describe() # just a quick test arr_final_df['PE_GLCM_correlation_asym_wsize=33'] == arr_final_df['PE_GLCM_correlation_wsize=33'] """ Explanation: 6 - Concatenate results with input into a single numpy array, then make it into final dataframe End of explanation """ pca = decomposition.PCA() scld = arr_final_df.drop(['Well Name', 'Formation'],axis=1) scaler = preprocessing.StandardScaler().fit(scld) scld = scaler.transform(scld) pca.fit(scld) np.set_printoptions(suppress=True) # so output is not in scientific notation print np.cumsum(pca.explained_variance_ratio_)[:170] fig = plt.figure(figsize=(14,8)) plt.plot(np.arange(1, len(np.cumsum(pca.explained_variance_ratio_))+1, 1)[:170], np.cumsum(pca.explained_variance_ratio_)[:170]) plt.show """ Explanation: 7 - PCA dimensionality analysis Run PCA, and look at the significance of the components. 
The explained variance shows how much information (variance) can be attributed to each of the principal components, and its cumulative sum can be used to determine the number of components to select: End of explanation """ arr_final_df.to_csv('engineered_features_validation_set2.csv', sep=',', index=False) """ Explanation: It looks like from the plot above that it would take more than 100 PCs for the cumulative explained variance ratio reache 0.99. We will use another technique to reduce the number of features to go into the classification. End of explanation """
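As a small helper (a sketch only, assuming the fitted pca object from the cells above is still in scope; cum_var, target and n_comp are new names, and each threshold is assumed to be reached, which holds here since the full cumulative ratio sums to 1), we can report how many principal components are needed to reach a few common cumulative explained-variance thresholds.
cum_var = np.cumsum(pca.explained_variance_ratio_)
for target in [0.90, 0.95, 0.99]:
    # first index where the cumulative ratio reaches the target (assumes the target is reached)
    n_comp = int(np.argmax(cum_var >= target) + 1)
    print 'components needed for {:.0%} of the variance: {}'.format(target, n_comp)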
gwtsa/gwtsa
examples/notebooks/9_Response function comparison.ipynb
mit
import numpy as np import matplotlib.pyplot as plt from scipy.special import gammainc, gammaincinv from scipy.integrate import quad import pandas as pd import pastas as ps %matplotlib inline rain = ps.read.read_knmi('data_notebook_5/etmgeg_260.txt', variables='RH').series evap = ps.read.read_knmi('data_notebook_5/etmgeg_260.txt', variables='EV24').series extraction = pd.read_csv('data_notebook_5/extraction.csv', index_col=0, parse_dates=True, dayfirst=True)['Extraction'] rain = rain['1980':'1999'] evap = evap['1980':'1999'] extraction = extraction['1980':'2000'] """ Explanation: Response function comparison Developed by Stijn Klop and Mark Bakker The purpose of this notebook is to compare several of the response functions available in Pastas. The Gamma and Hantush response function are compared with the Four Parameter function. Gamma Step Response The Gamma step response function is defined as: $$ s(t) = A \dfrac{1}{\Gamma(n)} \int_{0}^{t} \tau^{n-1} \cdot e^{-\tau/a} d\tau $$ Hantush Step Response The Hantush step response function is defined as: $$ s(t) = A \dfrac{1}{\int_{0}^{\infty} \tau^{-1} \cdot e^{-\tau/a - b/\tau} d\tau} \int_{0}^{t} \tau^{-1} \cdot e^{-\tau/a - b/\tau} d\tau $$ FourParam Step Response Both these response functions are compared with the Four Parameter response function. The Four Parameter response function is defined as: $$ s(t) = A \dfrac{1}{\int_{0}^{\infty} \tau^{n-1} \cdot e^{-\tau/a - b/\tau} d\tau} \int_{0}^{t} \tau^{n-1} \cdot e^{-\tau/a - b/\tau} d\tau $$ The Hantush and Gamma response function are special cases of the Four Parameter function. In this notebook these response functions are compared. This is done using syntheticly generated groundwater observations. First the required packages are imported and the data is loaded. The rainfall and evaporation data are imported from KNMI station De Bilt using Pastas. An extraction time series is imported using the Pandas package. For this example the time series are selected from the year 1980 until 2000. End of explanation """ def gamma_tmax(A, n, a, cutoff=0.99): return gammaincinv(n, cutoff) * a def gamma_step(A, n, a, cutoff=0.99): tmax = gamma_tmax(A, n, a, cutoff) t = np.arange(0, tmax, 1) s = A * gammainc(n, t / a) return s def gamma_block(A, n, a, cutoff=0.99): # returns the gamma block response starting at t=0 with intervals of delt = 1 s = gamma_step(A, n, a, cutoff) return np.append(s[0], s[1:] - s[:-1]) def hantush_func(t, a, b): return (t ** -1) * np.exp(-(a / t) - (t / b)) def hantush_step(A, a, b, tmax=1000, cutoff=0.99): t = np.arange(0, tmax) f = np.zeros(tmax) for i in range(1,tmax): f[i] = quad(hantush_func, i-1, i, args=(a, b))[0] F = np.cumsum(f) return (A / quad(hantush_func, 0, np.inf, args=(a, b))[0]) * F def hantush_block(A, a, b, tmax=1000, cutoff=0.99): s = hantush_step(A, a, b, tmax=tmax, cutoff=cutoff) return s[1:] - s[:-1] """ Explanation: Defining required functions Several function are defined to generate the synthetic groundwater observations. In this example two groundwater series are generated, one using a Gamma response function and one using a Hantush response function. 
End of explanation """ Atrue = 800 ntrue = 1.1 atrue = 200 dtrue = 20 h = gamma_block(Atrue, ntrue, atrue) * 0.001 tmax = gamma_tmax(Atrue, ntrue, atrue) plt.plot(h) plt.xlabel('Time (days)') plt.ylabel('Head response (m) due to 1 mm of rain in day 1') plt.title('Gamma block response with tmax=' + str(int(tmax))); step = gamma_block(Atrue, ntrue, atrue)[1:] lenstep = len(step) h = dtrue * np.ones(len(rain) + lenstep) for i in range(len(rain)): h[i:i + lenstep] += rain[i] * step head = pd.DataFrame(index=rain.index, data=h[:len(rain)],) head = head['1990':'1999'] plt.figure(figsize=(12,5)) plt.plot(head,'k.', label='head') plt.legend(loc=0) plt.ylabel('Head (m)') plt.xlabel('Time (years)') """ Explanation: Comparing the Gamma and the Four Parameter response function The first test is to compare the Gamma with the Four Parameter response function. Using the function defined above the Gamma block response function can be generated. The parameters for the block response Atrue, ntrue and atrue are defined together with the dtrue parameter. A synthetic groundwater head series is generated using the block response function and the rainfall data series. End of explanation """ ml = ps.Model(head) sm = ps.StressModel(rain, ps.Gamma, name='recharge', settings='prec') ml.add_stressmodel(sm) ml.solve(noise=False) ml.plots.results() """ Explanation: Create Pastas model The synthetic head series is used as input for the Pastas model. A stress model is created with the rainfall data series and the Gamma response function. The stress model is added to the Pastas model, and the model is solved. End of explanation """ ml2 = ps.Model(head) sm2 = ps.StressModel(rain, ps.FourParam, name='recharge', settings='prec') ml2.add_stressmodel(sm2) ml2.solve(noise=False) ml2.plots.results() """ Explanation: The results of the Pastas simulation show that Pastas is able to simulate the synthetic groundwater head. The parameters calculated with Pastas are equal to the parameters used to generate the synthetic groundwater series; Atrue, ntrue, atrue and dtrue. Create Pastas model using the Four Parameter response function The next step is to simulate the synthetic head series using Pastas with the Four Parameter response function. A new pastas model is created using the same head series as input. A stressmodel is created with the rainfall and the Four Parameter response function. The model is solved and the results are plotted. End of explanation """ Atrue_hantush = -0.01 # Atrue is negative since a positive extraction results in a drop in groundwater head. atrue_hantush = 100 # the parameter a is equal to cS in the hantush equation. 
rho = 2 btrue_hantush = atrue_hantush * rho ** 2 / 4 dtrue_hantush = 20 h_hantush = hantush_block(Atrue_hantush, atrue_hantush, btrue_hantush) plt.plot(h_hantush) plt.xlabel('Time (days)') plt.ylabel('Head response (m) due to 1 m3 of extraction in day 1') plt.title('Hantush block response with tmax=' + str(1000)); step_hantush = hantush_block(Atrue_hantush, atrue_hantush, btrue_hantush)[1:] lenstep = len(step_hantush) h_hantush = dtrue * np.ones(len(extraction) + lenstep) for i in range(len(extraction)): h_hantush[i:i + lenstep] += extraction[i] * step_hantush head_hantush = pd.DataFrame(index=extraction.index, data=h_hantush[:len(extraction)],) head_hantush = head_hantush['1990':'1999'] plt.figure(figsize=(12,5)) plt.plot(head_hantush,'k.', label='head') plt.legend(loc=0) plt.ylabel('Head (m)') plt.xlabel('Time (years)') """ Explanation: The results of the Pastas simulation show that the groundwater head series can be simulated using the Four Parameter resposne function. The parameters calculated using Pastas only slightly deviate from the parameters Atrue, ntrue, atrue and dtrue defined above. The parameter recharge_b is almost equal to 0 (meaning that the Four Parameter responce function is almost equal to the Gamma response function, as can be seen above). Comparing the Hantush and Four Parameter response function In the second example of this notebook the Four Parameter response function is compared to the Hantush response function. A Hantush block response is plotted using the parameters; Atrue_hantush, atrue_hantush, btrue_hantush. The parameter btrue_hantush is calculated using rho and atrue_hantush according to Veling & Maas (2010). The Hantush block response is used together with the extraction data series to simulate the synthetic groundwater head. End of explanation """ ml3 = ps.Model(head_hantush) sm3 = ps.StressModel(extraction, ps.Hantush, name='extraction', settings='well', up=False) ml3.add_stressmodel(sm3) ml3.solve(noise=False) ml3.plots.results() """ Explanation: Create Pastas model A Pastas model is created using the head_hantush series as input. A stress model is created with the Pastas Hantush response function and the extraction as input. The stress model is added to the Pastas model and the Pastas model is solved. End of explanation """ ml4 = ps.Model(head_hantush) sm4 = ps.StressModel(extraction, ps.FourParam, name='extraction', settings='well', up=False) ml4.add_stressmodel(sm4) ml4.solve(noise=False) ml4.plots.results() """ Explanation: The results of the Pastas simulation show that the observed head can be simulated using the Hantush response function. The parameters calibrated with Pastas are very close to the true parameters. Create Pastas model using the Four Parameter response function A new Pastas model is created. A stress model is created using the extraction data series and the Four Parameter function, FourParam, as input. The stress model is added to the Pastas model an the model is solved. End of explanation """
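As a closing visual comparison (a sketch only, using functions and true parameters already defined in this notebook; nothing new is assumed beyond matplotlib being available as plt), we can overlay the two true block responses that were used to build the synthetic heads. Note the opposite signs: recharge raises the head while extraction lowers it.
plt.figure(figsize=(10, 4))
plt.plot(gamma_block(Atrue, ntrue, atrue), label='Gamma block response (recharge)')
plt.plot(hantush_block(Atrue_hantush, atrue_hantush, btrue_hantush),
         label='Hantush block response (extraction)')
plt.xlabel('Time (days)')
plt.ylabel('Block response')
plt.legend(loc='best');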
ES-DOC/esdoc-jupyterhub
notebooks/cas/cmip6/models/sandbox-3/atmos.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'cas', 'sandbox-3', 'atmos') """ Explanation: ES-DOC CMIP6 Model Properties - Atmos MIP Era: CMIP6 Institute: CAS Source ID: SANDBOX-3 Topic: Atmos Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. Properties: 156 (127 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:45 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties --&gt; Overview 2. Key Properties --&gt; Resolution 3. Key Properties --&gt; Timestepping 4. Key Properties --&gt; Orography 5. Grid --&gt; Discretisation 6. Grid --&gt; Discretisation --&gt; Horizontal 7. Grid --&gt; Discretisation --&gt; Vertical 8. Dynamical Core 9. Dynamical Core --&gt; Top Boundary 10. Dynamical Core --&gt; Lateral Boundary 11. Dynamical Core --&gt; Diffusion Horizontal 12. Dynamical Core --&gt; Advection Tracers 13. Dynamical Core --&gt; Advection Momentum 14. Radiation 15. Radiation --&gt; Shortwave Radiation 16. Radiation --&gt; Shortwave GHG 17. Radiation --&gt; Shortwave Cloud Ice 18. Radiation --&gt; Shortwave Cloud Liquid 19. Radiation --&gt; Shortwave Cloud Inhomogeneity 20. Radiation --&gt; Shortwave Aerosols 21. Radiation --&gt; Shortwave Gases 22. Radiation --&gt; Longwave Radiation 23. Radiation --&gt; Longwave GHG 24. Radiation --&gt; Longwave Cloud Ice 25. Radiation --&gt; Longwave Cloud Liquid 26. Radiation --&gt; Longwave Cloud Inhomogeneity 27. Radiation --&gt; Longwave Aerosols 28. Radiation --&gt; Longwave Gases 29. Turbulence Convection 30. Turbulence Convection --&gt; Boundary Layer Turbulence 31. Turbulence Convection --&gt; Deep Convection 32. Turbulence Convection --&gt; Shallow Convection 33. Microphysics Precipitation 34. Microphysics Precipitation --&gt; Large Scale Precipitation 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics 36. Cloud Scheme 37. Cloud Scheme --&gt; Optical Cloud Properties 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution 40. Observation Simulation 41. Observation Simulation --&gt; Isscp Attributes 42. Observation Simulation --&gt; Cosp Attributes 43. Observation Simulation --&gt; Radar Inputs 44. Observation Simulation --&gt; Lidar Inputs 45. Gravity Waves 46. Gravity Waves --&gt; Orographic Gravity Waves 47. Gravity Waves --&gt; Non Orographic Gravity Waves 48. Solar 49. Solar --&gt; Solar Pathways 50. Solar --&gt; Solar Constant 51. Solar --&gt; Orbital Parameters 52. 
Solar --&gt; Insolation Ozone 53. Volcanos 54. Volcanos --&gt; Volcanoes Treatment 1. Key Properties --&gt; Overview Top level key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "AGCM" # "ARCM" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of atmospheric model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "primitive equations" # "non-hydrostatic" # "anelastic" # "Boussinesq" # "hydrostatic" # "quasi-hydrostatic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the atmosphere. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Resolution Characteristics of the model resolution 2.1. Horizontal Resolution Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.3. Range Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 2.4. Number Of Vertical Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels resolved on the computational grid. 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.high_top') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 2.5. High Top Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Timestepping Characteristics of the atmosphere model time stepping 3.1. Timestep Dynamics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep for the dynamics, e.g. 30 min. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.2. Timestep Shortwave Radiative Transfer Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for the shortwave radiative transfer, e.g. 1.5 hours. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.3. Timestep Longwave Radiative Transfer Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for the longwave radiative transfer, e.g. 3 hours. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.orography.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "present day" # "modified" # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Orography Characteristics of the model orography 4.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of the orography. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.orography.changes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "related to ice sheets" # "related to tectonics" # "modified mean" # "modified variance if taken into account in model (cf gravity waves)" # TODO - please enter value(s) """ Explanation: 4.2. Changes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N If the orography type is modified describe the time adaptation changes. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Grid --&gt; Discretisation Atmosphere grid discretisation 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of grid discretisation in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "spectral" # "fixed grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6. Grid --&gt; Discretisation --&gt; Horizontal Atmosphere discretisation in the horizontal 6.1. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "finite elements" # "finite volumes" # "finite difference" # "centered finite difference" # TODO - please enter value(s) """ Explanation: 6.2. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "second" # "third" # "fourth" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.3. Scheme Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation function order End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "filter" # "pole rotation" # "artificial island" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.4. Horizontal Pole Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal discretisation pole singularity treatment End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Gaussian" # "Latitude-Longitude" # "Cubed-Sphere" # "Icosahedral" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.5. Grid Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal grid type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "isobaric" # "sigma" # "hybrid sigma-pressure" # "hybrid pressure" # "vertically lagrangian" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 7. Grid --&gt; Discretisation --&gt; Vertical Atmosphere discretisation in the vertical 7.1. Coordinate Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Type of vertical coordinate system End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Dynamical Core Characteristics of the dynamical core 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of atmosphere dynamical core End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.dynamical_core.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the dynamical core of the model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Adams-Bashforth" # "explicit" # "implicit" # "semi-implicit" # "leap frog" # "multi-step" # "Runge Kutta fifth order" # "Runge Kutta second order" # "Runge Kutta third order" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.3. Timestepping Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestepping framework type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "surface pressure" # "wind components" # "divergence/curl" # "temperature" # "potential temperature" # "total water" # "water vapour" # "water liquid" # "water ice" # "total water moments" # "clouds" # "radiation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of the model prognostic variables End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 9. Dynamical Core --&gt; Top Boundary Type of boundary layer at the top of the model 9.1. Top Boundary Condition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary condition End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.2. Top Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary heat treatment End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.3. Top Wind Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary wind treatment End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10. Dynamical Core --&gt; Lateral Boundary Type of lateral boundary condition (if the model is a regional model) 10.1. Condition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Type of lateral boundary condition End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11. Dynamical Core --&gt; Diffusion Horizontal Horizontal diffusion scheme 11.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal diffusion scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "iterated Laplacian" # "bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.2. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal diffusion scheme method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Heun" # "Roe and VanLeer" # "Roe and Superbee" # "Prather" # "UTOPIA" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12. Dynamical Core --&gt; Advection Tracers Tracer advection scheme 12.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Tracer advection scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Eulerian" # "modified Euler" # "Lagrangian" # "semi-Lagrangian" # "cubic semi-Lagrangian" # "quintic semi-Lagrangian" # "mass-conserving" # "finite volume" # "flux-corrected" # "linear" # "quadratic" # "quartic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.2. Scheme Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Tracer advection scheme characteristics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "dry mass" # "tracer mass" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.3. Conserved Quantities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Tracer advection scheme conserved quantities End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Priestley algorithm" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.4. Conservation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracer advection scheme conservation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "VanLeer" # "Janjic" # "SUPG (Streamline Upwind Petrov-Galerkin)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Dynamical Core --&gt; Advection Momentum Momentum advection scheme 13.1. 
Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Momentum advection schemes name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "2nd order" # "4th order" # "cell-centred" # "staggered grid" # "semi-staggered grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.2. Scheme Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Momentum advection scheme characteristics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa D-grid" # "Arakawa E-grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.3. Scheme Staggering Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Momentum advection scheme staggering type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Angular momentum" # "Horizontal momentum" # "Enstrophy" # "Mass" # "Total energy" # "Vorticity" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.4. Conserved Quantities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Momentum advection scheme conserved quantities End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.5. Conservation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Momentum advection scheme conservation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.aerosols') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "sulphate" # "nitrate" # "sea salt" # "dust" # "ice" # "organic" # "BC (black carbon / soot)" # "SOA (secondary organic aerosols)" # "POM (particulate organic matter)" # "polar stratospheric ice" # "NAT (nitric acid trihydrate)" # "NAD (nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particle)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14. Radiation Characteristics of the atmosphere radiation process 14.1. Aerosols Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Aerosols whose radiative effect is taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15. Radiation --&gt; Shortwave Radiation Properties of the shortwave radiation scheme 15.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of shortwave radiation in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "wide-band model" # "correlated-k" # "exponential sum fitting" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.3. Spectral Integration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Shortwave radiation scheme spectral integration End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "two-stream" # "layer interaction" # "bulk" # "adaptive" # "multi-stream" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.4. Transport Calculation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Shortwave radiation transport calculation methods End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 15.5. Spectral Intervals Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Shortwave radiation scheme number of spectral intervals End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CO2" # "CH4" # "N2O" # "CFC-11 eq" # "CFC-12 eq" # "HFC-134a eq" # "Explicit ODSs" # "Explicit other fluorinated gases" # "O3" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16. Radiation --&gt; Shortwave GHG Representation of greenhouse gases in the shortwave radiation scheme 16.1. Greenhouse Gas Complexity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CFC-12" # "CFC-11" # "CFC-113" # "CFC-114" # "CFC-115" # "HCFC-22" # "HCFC-141b" # "HCFC-142b" # "Halon-1211" # "Halon-1301" # "Halon-2402" # "methyl chloroform" # "carbon tetrachloride" # "methyl chloride" # "methylene chloride" # "chloroform" # "methyl bromide" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.2. 
ODS Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HFC-134a" # "HFC-23" # "HFC-32" # "HFC-125" # "HFC-143a" # "HFC-152a" # "HFC-227ea" # "HFC-236fa" # "HFC-245fa" # "HFC-365mfc" # "HFC-43-10mee" # "CF4" # "C2F6" # "C3F8" # "C4F10" # "C5F12" # "C6F14" # "C7F16" # "C8F18" # "c-C4F8" # "NF3" # "SF6" # "SO2F2" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.3. Other Flourinated Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17. Radiation --&gt; Shortwave Cloud Ice Shortwave radiative properties of ice crystals in clouds 17.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with cloud ice crystals End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bi-modal size distribution" # "ensemble of ice crystals" # "mean projected area" # "ice water path" # "crystal asymmetry" # "crystal aspect ratio" # "effective crystal radius" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud ice crystals in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud ice crystals in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18. Radiation --&gt; Shortwave Cloud Liquid Shortwave radiative properties of liquid droplets in clouds 18.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with cloud liquid droplets End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
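# NOTE (illustrative sketch only): the property set below is an ENUM with cardinality 1.N,
# so at least one entry from its "Valid Choices" list applies. A value is given by quoting
# a choice exactly as listed, for example:
#     DOC.set_value("effective cloud droplet radii")  # hypothetical selection, illustration only
# Whether several applicable choices are recorded via repeated DOC.set_value calls is an
# assumption; the template itself only shows the single-call form.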
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud droplet number concentration" # "effective cloud droplet radii" # "droplet size distribution" # "liquid water path" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud liquid droplets in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "geometric optics" # "Mie theory" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Monte Carlo Independent Column Approximation" # "Triplecloud" # "analytic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19. Radiation --&gt; Shortwave Cloud Inhomogeneity Cloud inhomogeneity in the shortwave radiation scheme 19.1. Cloud Inhomogeneity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for taking into account horizontal cloud inhomogeneity End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20. Radiation --&gt; Shortwave Aerosols Shortwave radiative properties of aerosols 20.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with aerosols End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "number concentration" # "effective radii" # "size distribution" # "asymmetry" # "aspect ratio" # "mixing state" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of aerosols in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20.3. 
Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to aerosols in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 21. Radiation --&gt; Shortwave Gases Shortwave radiative properties of gases 21.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with gases End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22. Radiation --&gt; Longwave Radiation Properties of the longwave radiation scheme 22.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of longwave radiation in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the longwave radiation scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "wide-band model" # "correlated-k" # "exponential sum fitting" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22.3. Spectral Integration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Longwave radiation scheme spectral integration End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "two-stream" # "layer interaction" # "bulk" # "adaptive" # "multi-stream" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22.4. Transport Calculation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Longwave radiation transport calculation methods End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 22.5. Spectral Intervals Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Longwave radiation scheme number of spectral intervals End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CO2" # "CH4" # "N2O" # "CFC-11 eq" # "CFC-12 eq" # "HFC-134a eq" # "Explicit ODSs" # "Explicit other fluorinated gases" # "O3" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23. 
Radiation --&gt; Longwave GHG Representation of greenhouse gases in the longwave radiation scheme 23.1. Greenhouse Gas Complexity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CFC-12" # "CFC-11" # "CFC-113" # "CFC-114" # "CFC-115" # "HCFC-22" # "HCFC-141b" # "HCFC-142b" # "Halon-1211" # "Halon-1301" # "Halon-2402" # "methyl chloroform" # "carbon tetrachloride" # "methyl chloride" # "methylene chloride" # "chloroform" # "methyl bromide" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.2. ODS Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HFC-134a" # "HFC-23" # "HFC-32" # "HFC-125" # "HFC-143a" # "HFC-152a" # "HFC-227ea" # "HFC-236fa" # "HFC-245fa" # "HFC-365mfc" # "HFC-43-10mee" # "CF4" # "C2F6" # "C3F8" # "C4F10" # "C5F12" # "C6F14" # "C7F16" # "C8F18" # "c-C4F8" # "NF3" # "SF6" # "SO2F2" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.3. Other Flourinated Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 24. Radiation --&gt; Longwave Cloud Ice Longwave radiative properties of ice crystals in clouds 24.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with cloud ice crystals End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bi-modal size distribution" # "ensemble of ice crystals" # "mean projected area" # "ice water path" # "crystal asymmetry" # "crystal aspect ratio" # "effective crystal radius" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 24.2. Physical Reprenstation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud ice crystals in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 24.3. 
Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud ice crystals in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25. Radiation --&gt; Longwave Cloud Liquid Longwave radiative properties of liquid droplets in clouds 25.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with cloud liquid droplets End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud droplet number concentration" # "effective cloud droplet radii" # "droplet size distribution" # "liquid water path" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud liquid droplets in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "geometric optics" # "Mie theory" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud liquid droplets in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Monte Carlo Independent Column Approximation" # "Triplecloud" # "analytic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26. Radiation --&gt; Longwave Cloud Inhomogeneity Cloud inhomogeneity in the longwave radiation scheme 26.1. Cloud Inhomogeneity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for taking into account horizontal cloud inhomogeneity End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27. Radiation --&gt; Longwave Aerosols Longwave radiative properties of aerosols 27.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with aerosols End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "number concentration" # "effective radii" # "size distribution" # "asymmetry" # "aspect ratio" # "mixing state" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of aerosols in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to aerosols in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 28. Radiation --&gt; Longwave Gases Longwave radiative properties of gases 28.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with gases End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29. Turbulence Convection Atmosphere Convective Turbulence and Clouds 29.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of atmosphere convection and turbulence End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Mellor-Yamada" # "Holtslag-Boville" # "EDMF" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30. Turbulence Convection --&gt; Boundary Layer Turbulence Properties of the boundary layer turbulence scheme 30.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Boundary layer turbulence scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "TKE prognostic" # "TKE diagnostic" # "TKE coupled with water" # "vertical profile of Kz" # "non-local diffusion" # "Monin-Obukhov similarity" # "Coastal Buddy Scheme" # "Coupled with convection" # "Coupled with gravity waves" # "Depth capped at cloud base" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Boundary layer turbulence scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
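# NOTE (illustrative sketch only): the property set below is typed INTEGER (cardinality 1.1),
# so it takes an unquoted number rather than a quoted string, for example:
#     DOC.set_value(1)  # hypothetical closure order, illustration only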
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 30.3. Closure Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Boundary layer turbulence scheme closure order End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 30.4. Counter Gradient Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Uses boundary layer turbulence scheme counter gradient End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 31. Turbulence Convection --&gt; Deep Convection Properties of the deep convection scheme 31.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Deep convection scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mass-flux" # "adjustment" # "plume ensemble" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Deep convection scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CAPE" # "bulk" # "ensemble" # "CAPE/WFN based" # "TKE/CIN based" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.3. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Deep convection scheme method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vertical momentum transport" # "convective momentum transport" # "entrainment" # "detrainment" # "penetrative convection" # "updrafts" # "downdrafts" # "radiative effect of anvils" # "re-evaporation of convective precipitation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.4. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical processes taken into account in the parameterisation of deep convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "tuning parameter based" # "single moment" # "two moment" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.5. Microphysics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Microphysics scheme for deep convection. 
Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32. Turbulence Convection --&gt; Shallow Convection Properties of the shallow convection scheme 32.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Shallow convection scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mass-flux" # "cumulus-capped boundary layer" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 32.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N shallow convection scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "same as deep (unified)" # "included in boundary layer turbulence" # "separate diagnosis" # TODO - please enter value(s) """ Explanation: 32.3. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 shallow convection scheme method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "convective momentum transport" # "entrainment" # "detrainment" # "penetrative convection" # "re-evaporation of convective precipitation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 32.4. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical processes taken into account in the parameterisation of shallow convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "tuning parameter based" # "single moment" # "two moment" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 32.5. Microphysics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Microphysics scheme for shallow convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 33. Microphysics Precipitation Large Scale Cloud Microphysics and Precipitation 33.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of large scale cloud microphysics and precipitation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 34. Microphysics Precipitation --&gt; Large Scale Precipitation Properties of the large scale precipitation scheme 34.1. 
Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name of the large scale precipitation parameterisation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "liquid rain" # "snow" # "hail" # "graupel" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 34.2. Hydrometeors Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Precipitating hydrometeors taken into account in the large scale precipitation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics Properties of the large scale cloud microphysics scheme 35.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name of the microphysics parameterisation scheme used for large scale clouds. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mixed phase" # "cloud droplets" # "cloud ice" # "ice nucleation" # "water vapour deposition" # "effect of raindrops" # "effect of snow" # "effect of graupel" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 35.2. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Large scale cloud microphysics processes End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 36. Cloud Scheme Characteristics of the cloud scheme 36.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of the atmosphere cloud scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 36.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the cloud scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "atmosphere_radiation" # "atmosphere_microphysics_precipitation" # "atmosphere_turbulence_convection" # "atmosphere_gravity_waves" # "atmosphere_solar" # "atmosphere_volcano" # "atmosphere_cloud_simulator" # TODO - please enter value(s) """ Explanation: 36.3. Atmos Coupling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Atmosphere components that are linked to the cloud scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
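# NOTE (illustrative sketch only): the property set below is a BOOLEAN (cardinality 1.1);
# its valid choices are the unquoted Python literals True and False, for example:
#     DOC.set_value(True)  # hypothetical answer, illustration only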
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 36.4. Uses Separate Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "entrainment" # "detrainment" # "bulk cloud" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 36.5. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Processes included in the cloud scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 36.6. Prognostic Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the cloud scheme a prognostic scheme? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 36.7. Diagnostic Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the cloud scheme a diagnostic scheme? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud amount" # "liquid" # "ice" # "rain" # "snow" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 36.8. Prognostic Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List the prognostic variables used by the cloud scheme, if applicable. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "random" # "maximum" # "maximum-random" # "exponential" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 37. Cloud Scheme --&gt; Optical Cloud Properties Optical cloud properties 37.1. Cloud Overlap Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Method for taking into account overlapping of cloud layers End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.2. Cloud Inhomogeneity Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Method for taking into account cloud inhomogeneity End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) """ Explanation: 38. 
Cloud Scheme --&gt; Sub Grid Scale Water Distribution Sub-grid scale water distribution 38.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 38.2. Function Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution function name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 38.3. Function Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution function type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" # TODO - please enter value(s) """ Explanation: 38.4. Convection Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Sub-grid scale water distribution coupling with convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) """ Explanation: 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution Sub-grid scale ice distribution 39.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 39.2. Function Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution function name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 39.3. Function Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution function type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" # TODO - please enter value(s) """ Explanation: 39.4. Convection Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Sub-grid scale ice distribution coupling with convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.observation_simulation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 40. Observation Simulation Characteristics of observation simulation 40.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of observation simulator characteristics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "no adjustment" # "IR brightness" # "visible optical depth" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 41. Observation Simulation --&gt; Isscp Attributes ISSCP Characteristics 41.1. Top Height Estimation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Cloud simulator ISSCP top height estimation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "lowest altitude level" # "highest altitude level" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 41.2. Top Height Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator ISSCP top height direction End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Inline" # "Offline" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 42. Observation Simulation --&gt; Cosp Attributes CFMIP Observational Simulator Package attributes 42.1. Run Configuration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP run configuration End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 42.2. Number Of Grid Points Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of grid points End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 42.3. Number Of Sub Columns Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of sub-columns used to simulate sub-grid variability End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 42.4. Number Of Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of levels End of explanation """ # PROPERTY ID - DO NOT EDIT !
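# NOTE (illustrative sketch only): the property set below is a FLOAT (cardinality 1.1)
# expressed in the units stated in its description (Hz), so it takes an unquoted number,
# for example:
#     DOC.set_value(94.0e9)  # hypothetical 94 GHz cloud-radar frequency, illustration only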
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 43. Observation Simulation --&gt; Radar Inputs Characteristics of the cloud radar simulator 43.1. Frequency Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar frequency (Hz) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "surface" # "space borne" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 43.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 43.3. Gas Absorption Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar uses gas absorption End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 43.4. Effective Radius Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar uses effective radius End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "ice spheres" # "ice non-spherical" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 44. Observation Simulation --&gt; Lidar Inputs Characteristics of the cloud lidar simulator 44.1. Ice Types Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator lidar ice type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "max" # "random" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 44.2. Overlap Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Cloud simulator lidar overlap End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 45. Gravity Waves Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources. 45.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of gravity wave parameterisation in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Rayleigh friction" # "Diffusive sponge layer" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 45.2. 
Sponge Layer Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sponge layer in the upper levels in order to avoid gravity wave reflection at the top. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "continuous spectrum" # "discrete spectrum" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 45.3. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background wave distribution End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "effect on drag" # "effect on lifting" # "enhanced topography" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 45.4. Subgrid Scale Orography Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Subgrid scale orography effects taken into account. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 46. Gravity Waves --&gt; Orographic Gravity Waves Gravity waves generated due to the presence of orography 46.1. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the orographic gravity wave scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "linear mountain waves" # "hydraulic jump" # "envelope orography" # "low level flow blocking" # "statistical sub-grid scale variance" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 46.2. Source Mechanisms Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Orographic gravity wave source mechanisms End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "non-linear calculation" # "more than two cardinal directions" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 46.3. Calculation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Orographic gravity wave calculation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "linear theory" # "non-linear theory" # "includes boundary layer ducting" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 46.4. Propagation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Orographic gravity wave propagation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT !
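# NOTE (illustrative sketch only): the property set below is a single-valued ENUM
# (cardinality 1.1); the value must reproduce one of the listed "Valid Choices" verbatim,
# for example:
#     DOC.set_value("wave saturation vs Richardson number")  # hypothetical selection, illustration only
# For a scheme not in the list, the "Other: [Please specify]" choice is presumably used,
# with the specific scheme spelled out in place of the bracketed text (an assumption).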
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "total wave" # "single wave" # "spectral" # "linear" # "wave saturation vs Richardson number" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 46.5. Dissipation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Orographic gravity wave dissipation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 47. Gravity Waves --&gt; Non Orographic Gravity Waves Gravity waves generated by non-orographic processes. 47.1. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the non-orographic gravity wave scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "convection" # "precipitation" # "background spectrum" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 47.2. Source Mechanisms Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Non-orographic gravity wave source mechanisms End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "spatially dependent" # "temporally dependent" # TODO - please enter value(s) """ Explanation: 47.3. Calculation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Non-orographic gravity wave calculation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "linear theory" # "non-linear theory" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 47.4. Propagation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Non-orographic gravity wave propogation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "total wave" # "single wave" # "spectral" # "linear" # "wave saturation vs Richardson number" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 47.5. Dissipation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Non-orographic gravity wave dissipation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 48. Solar Top of atmosphere solar insolation characteristics 48.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of solar insolation of the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "SW radiation" # "precipitating energetic particles" # "cosmic rays" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 49. Solar --&gt; Solar Pathways Pathways for solar forcing of the atmosphere 49.1. Pathways Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Pathways for the solar forcing of the atmosphere model domain End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" # TODO - please enter value(s) """ Explanation: 50. Solar --&gt; Solar Constant Solar constant and top of atmosphere insolation characteristics 50.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of the solar constant. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 50.2. Fixed Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If the solar constant is fixed, enter the value of the solar constant (W m-2). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 50.3. Transient Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 solar constant transient characteristics (W m-2) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" # TODO - please enter value(s) """ Explanation: 51. Solar --&gt; Orbital Parameters Orbital parameters and top of atmosphere insolation characteristics 51.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of orbital parameters End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 51.2. Fixed Reference Date Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date for fixed orbital parameters (yyyy) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 51.3. Transient Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of transient orbital parameters End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Berger 1978" # "Laskar 2004" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 51.4. 
Computation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method used for computing orbital parameters. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 52. Solar --&gt; Insolation Ozone Impact of solar insolation on stratospheric ozone 52.1. Solar Ozone Impact Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does top of atmosphere insolation impact on stratospheric ozone? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 53. Volcanos Characteristics of the implementation of volcanoes 53.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of the implementation of volcanic effects in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "high frequency solar constant anomaly" # "stratospheric aerosols optical thickness" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 54. Volcanos --&gt; Volcanoes Treatment Treatment of volcanoes in the atmosphere 54.1. Volcanoes Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How volcanic effects are modeled in the atmosphere. End of explanation """
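# A hypothetical convenience wrapper (an illustration, not part of the ES-DOC generated template):
# it guards DOC.set_value() against typos by checking the candidate value against the
# "Valid Choices" listed in the comments of the cell being filled in.
def set_enum_property(doc, property_id, value, valid_choices):
    """Set an ENUM property only if value is one of the listed choices."""
    if value not in valid_choices:
        raise ValueError('{0!r} is not a valid choice for {1}'.format(value, property_id))
    doc.set_id(property_id)
    doc.set_value(value)

# Example usage, reusing the choices of section 54.1 above:
# set_enum_property(DOC, 'cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation',
#                   "stratospheric aerosols optical thickness",
#                   ["high frequency solar constant anomaly",
#                    "stratospheric aerosols optical thickness",
#                    "Other: [Please specify]"])
""" Explanation: A minimal sketch of how the ENUM cells above can be filled in with a small guard against typos. The helper name and signature are illustrative assumptions rather than part of the ES-DOC API; it relies only on the DOC.set_id and DOC.set_value calls already used throughout this notebook, and the valid choices are simply copied from the comments of the corresponding cell. End of explanation """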
netodeolino/TCC
TCC 02/Resultados/Janeiro/Janeiro.ipynb
mit
all_crime_tipos.head(10) all_crime_tipos_top10 = all_crime_tipos.head(10) all_crime_tipos_top10.plot(kind='barh', figsize=(12,6), color='#3f3fff') plt.title('Top 10 crimes por tipo (Jan 2017)') plt.xlabel('Número de crimes') plt.ylabel('Crime') plt.tight_layout() ax = plt.gca() ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}')) plt.show() """ Explanation: Filter of the 10 crime types with the most occurrences in January End of explanation """
all_crime_tipos """ Explanation: All criminal occurrences in January End of explanation """
group_df_janeiro = df_janeiro.groupby('CLUSTER') crimes = group_df_janeiro['NATUREZA DA OCORRÊNCIA'].count() crimes.plot(kind='barh', figsize=(10,7), color='#3f3fff') plt.title('Número de crimes por região (Jan 2017)') plt.xlabel('Número') plt.ylabel('Região') plt.tight_layout() ax = plt.gca() ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}')) plt.show() """ Explanation: Number of crimes per region End of explanation """
regioes = df_janeiro.groupby('CLUSTER').count() grupo_de_regioes = regioes.sort_values('NATUREZA DA OCORRÊNCIA', ascending=False) grupo_de_regioes['TOTAL'] = grupo_de_regioes.ID top_5_regioes_qtd = grupo_de_regioes.TOTAL.head(6) top_5_regioes_qtd.plot(kind='barh', figsize=(10,4), color='#3f3fff') plt.title('Top 5 regiões com mais crimes') plt.xlabel('Número de crimes') plt.ylabel('Região') plt.tight_layout() ax = plt.gca() ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}')) plt.show() """ Explanation: The 5 regions with the most occurrences End of explanation """
regiao_4_detalhe = df_janeiro[df_janeiro['CLUSTER'] == 4] regiao_4_detalhe """ Explanation: Above we can see that region 4 had the highest number of criminal occurrences. We can now look at these occurrences in more detail End of explanation """
crime_types = regiao_4_detalhe[['NATUREZA DA OCORRÊNCIA']] crime_type_total = crime_types.groupby('NATUREZA DA OCORRÊNCIA').size() crime_type_counts = regiao_4_detalhe[['NATUREZA DA OCORRÊNCIA']].groupby('NATUREZA DA OCORRÊNCIA').sum() crime_type_counts['TOTAL'] = crime_type_total all_crime_types = crime_type_counts.sort_values(by='TOTAL', ascending=False) crimes_top_5 = all_crime_types.head(5) crimes_top_5.plot(kind='barh', figsize=(11,3), color='#3f3fff') plt.title('Top 5 crimes na região 4') plt.xlabel('Número de crimes') plt.ylabel('Crime') plt.tight_layout() ax = plt.gca() ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}')) plt.show() """ Explanation: An analysis of the 5 most common occurrences End of explanation """
horas_mes = df_janeiro.HORA.value_counts() horas_mes_top10 = horas_mes.head(10) horas_mes_top10.plot(kind='barh', figsize=(11,4), color='#3f3fff') plt.title('Crimes por hora (Jan 2017)') plt.xlabel('Número de ocorrências') plt.ylabel('Hora do dia') plt.tight_layout() ax = plt.gca() ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}')) plt.show() """ Explanation: Filter of the 10 hours of the day with the most occurrences in January End of explanation """
crime_hours = regiao_4_detalhe[['HORA']] crime_hours_total = crime_hours.groupby('HORA').size() crime_hours_counts = regiao_4_detalhe[['HORA']].groupby('HORA').sum() crime_hours_counts['TOTAL'] = crime_hours_total all_hours_types = crime_hours_counts.sort_values(by='TOTAL', ascending=False) all_hours_types.head(5) all_hours_types_top5 = all_hours_types.head(5) all_hours_types_top5.plot(kind='barh', figsize=(11,3), color='#3f3fff') plt.title('Top 5 crimes por hora na região 4') plt.xlabel('Número de ocorrências')
plt.ylabel('Hora do dia') plt.tight_layout() ax = plt.gca() ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}')) plt.show() """ Explanation: Filter of the 5 hours of the day with the most occurrences in region 4 (the region with the most occurrences in January) End of explanation """
crimes_mes = df_janeiro.BAIRRO.value_counts() crimes_mes_top10 = crimes_mes.head(10) crimes_mes_top10.plot(kind='barh', figsize=(11,4), color='#3f3fff') plt.title('Top 10 Bairros com mais crimes (Jan 2017)') plt.xlabel('Número de ocorrências') plt.ylabel('Bairro') plt.tight_layout() ax = plt.gca() ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}')) plt.show() """ Explanation: Filter of the 10 neighborhoods with the most occurrences in January End of explanation """
barra_do_ceara = df_janeiro[df_janeiro['BAIRRO'] == 'BARRA DO CEARA'] crime_types = barra_do_ceara[['NATUREZA DA OCORRÊNCIA']] crime_type_total = crime_types.groupby('NATUREZA DA OCORRÊNCIA').size() crime_type_counts = barra_do_ceara[['NATUREZA DA OCORRÊNCIA']].groupby('NATUREZA DA OCORRÊNCIA').sum() crime_type_counts['TOTAL'] = crime_type_total all_crime_types = crime_type_counts.sort_values(by='TOTAL', ascending=False) all_crime_tipos_5 = all_crime_types.head(5) all_crime_tipos_5.plot(kind='barh', figsize=(15,4), color='#3f3fff') plt.title('Top 5 crimes na Barra do Ceará') plt.xlabel('Número de Crimes') plt.ylabel('Crime') plt.tight_layout() ax = plt.gca() ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}')) plt.show() """ Explanation: The neighborhood with the highest number of occurrences in January was Barra do Ceará. We can now look in more detail at what these crimes were End of explanation """
crime_types_bairro = regiao_4_detalhe[['BAIRRO']] crime_type_total_bairro = crime_types_bairro.groupby('BAIRRO').size() crime_type_counts_bairro = regiao_4_detalhe[['BAIRRO']].groupby('BAIRRO').sum() crime_type_counts_bairro['TOTAL'] = crime_type_total_bairro all_crime_types_bairro = crime_type_counts_bairro.sort_values(by='TOTAL', ascending=False) crimes_top_5_bairro = all_crime_types_bairro.head(5) crimes_top_5_bairro.plot(kind='barh', figsize=(11,3), color='#3f3fff') plt.title('Top 5 bairros na região 4') plt.xlabel('Quantidade') plt.ylabel('Bairro') plt.tight_layout() ax = plt.gca() ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}')) plt.show() """ Explanation: The 5 most common neighborhoods in region 4 End of explanation """
henrique_jorge = df_janeiro[df_janeiro['BAIRRO'] == 'HENRIQUE JORGE'] crime_types = henrique_jorge[['NATUREZA DA OCORRÊNCIA']] crime_type_total = crime_types.groupby('NATUREZA DA OCORRÊNCIA').size() crime_type_counts = henrique_jorge[['NATUREZA DA OCORRÊNCIA']].groupby('NATUREZA DA OCORRÊNCIA').sum() crime_type_counts['TOTAL'] = crime_type_total all_crime_types = crime_type_counts.sort_values(by='TOTAL', ascending=False) all_crime_tipos_5 = all_crime_types.head(5) all_crime_tipos_5.plot(kind='barh', figsize=(15,4), color='#3f3fff') plt.title('Top 5 crimes no Henrique Jorge') plt.xlabel('Número de Crimes') plt.ylabel('Crime') plt.tight_layout() ax = plt.gca() ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}')) plt.show() """ Explanation: Analysis of the Henrique Jorge neighborhood End of explanation """
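# Illustrative helper (an assumption, not part of the original analysis): the same
# value_counts -> head -> barh pattern is repeated in many cells above, so it can be
# wrapped once. Assumes df_janeiro, plt and ticker are loaded as in this notebook.
def plot_top_counts(df, column, top_n, title, xlabel, ylabel):
    counts = df[column].value_counts().head(top_n)
    counts.plot(kind='barh', figsize=(11, 4), color='#3f3fff')
    plt.title(title)
    plt.xlabel(xlabel)
    plt.ylabel(ylabel)
    plt.tight_layout()
    ax = plt.gca()
    ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
    plt.show()

# Example: reproduces the "top 10 neighborhoods" chart above in one call
# plot_top_counts(df_janeiro, 'BAIRRO', 10, 'Top 10 Bairros com mais crimes (Jan 2017)',
#                 'Número de ocorrências', 'Bairro')
""" Explanation: A minimal refactoring sketch that captures the count-sort-plot pattern used repeatedly above; the helper is an illustration only and assumes the df_janeiro DataFrame and the matplotlib objects (plt, ticker) already defined earlier in the notebook. End of explanation """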
phoebe-project/phoebe2-docs
2.1/tutorials/requiv_crit_semidetached.ipynb
gpl-3.0
!pip install -I "phoebe>=2.1,<2.2" """ Explanation: Critical Radii: Semidetached Systems Setup Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release). End of explanation """
%matplotlib inline import phoebe from phoebe import u # units import numpy as np import matplotlib.pyplot as plt logger = phoebe.logger() b = phoebe.default_binary() """ Explanation: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details. End of explanation """
b.add_constraint('semidetached', 'primary') """ Explanation: Semi-Detached Systems Semi-detached systems are implemented by constraining the value of requiv to be the same as requiv_max, which is done by applying the 'semidetached' constraint on the 'primary' component. End of explanation """
b['requiv@constraint@primary'] """ Explanation: We can view the constraint on requiv by accessing the constraint: End of explanation """
b['requiv_max@constraint@primary'] """ Explanation: Now whenever any of the relevant parameters (q, ecc, syncpar, sma) are changed, the value of requiv will change to match the critical value as defined by requiv_max. End of explanation """
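# Quick check (a sketch, not part of the original tutorial): with the semidetached
# constraint applied, changing a relevant parameter should drag requiv along with
# requiv_max. The explicit qualifier/component/context filters below are assumptions
# used to avoid ambiguous twigs; exact spellings may differ between PHOEBE versions.
b.set_value(qualifier='sma', component='binary', context='component', value=7.0)
print(b.get_value(qualifier='requiv', component='primary', context='component'))
print(b.get_value(qualifier='requiv_max', component='primary', context='component'))
""" Explanation: A small verification sketch: after changing sma, the two printed values should agree, because requiv of the primary is now driven by the requiv_max constraint shown above. End of explanation """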
laurentperrinet/Khoei_2017_PLoSCB
notebooks/SI_controls.ipynb
mit
%%writefile experiment_SI_controls.py """ A bunch of control runs """ import MotionParticlesFLE as mp gen_dot = mp.generate_dot import numpy as np import os from default_param import * image = {} experiment = 'SI' N_scan = 5 base = 10. #mp.N_trials = 4 for stimulus_tag, im_arg in zip(stim_labels, stim_args): #for stimulus_tag, im_arg in zip(stim_labels[1], stim_args[1]): #for D_x, D_V, label in zip([mp.D_x, PBP_D_x], [mp.D_V, PBP_D_V], ['MBP', 'PBP']): for D_x, D_V, label in zip([mp.D_x], [mp.D_V], ['MBP']): im_arg.update(D_V=D_V, D_x=D_x) _ = mp.figure_image_variable( os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label), N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y, fixed_args=im_arg, D_x=im_arg['D_x']*np.logspace(-2, 2, N_scan, base=base)) _ = mp.figure_image_variable( os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label), N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y, fixed_args=im_arg, D_V=im_arg['D_V']*np.logspace(-2, 2, N_scan, base=base)) _ = mp.figure_image_variable( os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label), N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y, fixed_args=im_arg, sigma_motion=mp.sigma_motion*np.logspace(-1., 1., N_scan, base=base)) _ = mp.figure_image_variable( os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label), N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y, fixed_args=im_arg, K_motion=mp.K_motion*np.logspace(-1., 1., N_scan, base=base)) _ = mp.figure_image_variable( os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label), N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y, fixed_args=im_arg, dot_size=im_arg['dot_size']*np.logspace(-1., 1., N_scan, base=base)) _ = mp.figure_image_variable( os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label), N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y, fixed_args=im_arg, sigma_I=mp.sigma_I*np.logspace(-1, 1, N_scan, base=base)) _ = mp.figure_image_variable( os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label), N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y, fixed_args=im_arg, im_noise=mp.im_noise*np.logspace(-1, 1, N_scan, base=base)) _ = mp.figure_image_variable( os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label), N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y, fixed_args=im_arg, sigma_noise=mp.sigma_noise*np.logspace(-1, 1, N_scan, base=base)) _ = mp.figure_image_variable( os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label), N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y, fixed_args=im_arg, p_epsilon=mp.p_epsilon*np.logspace(-1, 1, N_scan, base=base)) _ = mp.figure_image_variable( os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label), N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y, fixed_args=im_arg, 
v_init=mp.v_init*np.logspace(-1., 1., N_scan, base=base)) _ = mp.figure_image_variable( os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label), N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y, fixed_args=im_arg, v_prior=np.logspace(-.3, 5., N_scan, base=base)) _ = mp.figure_image_variable( os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label), N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y, fixed_args=im_arg, resample=np.linspace(0.1, 1., N_scan, endpoint=True))
%run experiment_SI_controls.py """ Explanation: FLE In this script the CONDENSATION is done for rightward and leftward motion of a dot stimulus, at different levels of noise, and also for the flashing stimuli needed for the simulation of flash-initiated and flash-terminated FLEs. The aim is to generate (Berry et al 99)'s figure 2: shifting RF position in the direction of motion. Initialization of notebook End of explanation """
!git commit -m' SI controls ' ../notebooks/SI_controls* ../scripts/experiment_SI_controls* """ Explanation: TODO : show results with a widget End of explanation """
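# Refactoring sketch (an assumption about style, not part of the original script):
# the repeated calls in experiment_SI_controls.py differ only in the keyword being
# scanned, so they can be collapsed into a loop over a dictionary of scans.
scans = {
    'D_x': im_arg['D_x'] * np.logspace(-2, 2, N_scan, base=base),
    'D_V': im_arg['D_V'] * np.logspace(-2, 2, N_scan, base=base),
    'sigma_motion': mp.sigma_motion * np.logspace(-1., 1., N_scan, base=base),
}
for key, values in scans.items():
    _ = mp.figure_image_variable(
        os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label),
        N_X, N_Y, N_frame, gen_dot, order=None,
        do_figure=do_figure, do_video=do_video,
        N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,
        fixed_args=im_arg, **{key: values})
""" Explanation: A compact sketch of the scanning pattern used in the script above, written with keyword unpacking so that each scanned parameter becomes a single dictionary entry; it reuses only names defined inside experiment_SI_controls.py (mp, im_arg, N_scan, base, stimulus_tag, label, ...) and is meant as an illustration of the pattern rather than a drop-in replacement. End of explanation """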
GoogleCloudPlatform/training-data-analyst
blogs/sme_academy/tfx/03_train.ipynb
apache-2.0
import tensorflow as tf import tensorflow_data_validation as tfdv import tensorflow_transform as tft print('TF version: {}'.format(tf.__version__)) print('TFT version: {}'.format(tft.__version__)) print('TFDV version: {}'.format(tfdv.__version__)) PROJECT = 'cloud-training-demos' # Replace with your PROJECT BUCKET = 'cloud-training-demos-ml' # Replace with your BUCKET REGION = 'us-central1' # Choose an available region for Cloud MLE import os os.environ['PROJECT'] = PROJECT os.environ['BUCKET'] = BUCKET os.environ['REGION'] = REGION %%bash gcloud config set project $PROJECT gcloud config set compute/region $REGION ## ensure we predict locally with our current Python environment gcloud config set ml_engine/local_python `which python` """ Explanation: ML with TensorFlow Extended (TFX) -- Part 3 The purpose of this tutorial is to show how to do end-to-end ML with TFX libraries on Google Cloud Platform. This tutorial covers: 1. Data analysis and schema generation with TF Data Validation. 2. Data preprocessing with TF Transform. 3. Model training with TF Estimator. 4. Model evaluation with TF Model Analysis. This notebook has been tested in Jupyter on the Deep Learning VM. Setup Cloud environment End of explanation """
DATA_DIR='gs://cloud-samples-data/ml-engine/census/data' import os TRAIN_DATA_FILE = os.path.join(DATA_DIR, 'adult.data.csv') EVAL_DATA_FILE = os.path.join(DATA_DIR, 'adult.test.csv') !gsutil ls -l $TRAIN_DATA_FILE !gsutil ls -l $EVAL_DATA_FILE HEADER = ['age', 'workclass', 'fnlwgt', 'education', 'education_num', 'marital_status', 'occupation', 'relationship', 'race', 'gender', 'capital_gain', 'capital_loss', 'hours_per_week', 'native_country', 'income_bracket'] TARGET_FEATURE_NAME = 'income_bracket' TARGET_LABELS = [' <=50K', ' >50K'] WEIGHT_COLUMN_NAME = 'fnlwgt_scaled' # note that you changed the column name in tft RAW_SCHEMA_LOCATION = 'raw_schema.pbtxt' """ Explanation: <img valign="middle" src="images/tfx.jpeg"> UCI Adult Dataset: https://archive.ics.uci.edu/ml/datasets/adult Predict whether income exceeds $50K/yr based on census data. Also known as "Census Income" dataset. End of explanation """
PREPROC_OUTPUT_DIR = 'gs://{}/census/tfx'.format(BUCKET) # from 02_transform.ipynb TRANSFORM_ARTIFACTS_DIR = os.path.join(PREPROC_OUTPUT_DIR,'transform') TRANSFORMED_DATA_DIR = os.path.join(PREPROC_OUTPUT_DIR,'transformed') !gsutil ls $TRANSFORM_ARTIFACTS_DIR !gsutil ls $TRANSFORMED_DATA_DIR transform_output = tft.TFTransformOutput(TRANSFORM_ARTIFACTS_DIR) """ Explanation: 3. Model Training For training the model, we use the TF Estimator API to train a premade DNNClassifier. We perform the following: 1. Load the transform schema 2. Use the transform schema to parse TFRecords in input_fn 3. Use the transform schema to create feature columns 4. Create a premade DNNClassifier 5. Train the model 6. Implement the serving_input_fn and apply the transform logic 7. Export and test the saved model.
3.1 Load transform output End of explanation """ def make_input_fn(tfrecords_files, batch_size, num_epochs=1, shuffle=False): def input_fn(): dataset = tf.data.experimental.make_batched_features_dataset( file_pattern=tfrecords_files, batch_size=batch_size, features=transform_output.transformed_feature_spec(), label_key=TARGET_FEATURE_NAME, reader=tf.data.TFRecordDataset, num_epochs=num_epochs, shuffle=shuffle ) return dataset return input_fn make_input_fn(TRANSFORMED_DATA_DIR+'/train*.tfrecords', 2, shuffle=False)() """ Explanation: 3.2 TFRecords Input Function End of explanation """ import math def create_feature_columns(): feature_columns = [] transformed_features = transform_output.transformed_metadata.schema._schema_proto.feature for feature in transformed_features: if feature.name in [TARGET_FEATURE_NAME, WEIGHT_COLUMN_NAME]: continue if hasattr(feature, 'int_domain') and feature.int_domain.is_categorical: vocab_size = feature.int_domain.max + 1 feature_columns.append( tf.feature_column.embedding_column( tf.feature_column.categorical_column_with_identity( feature.name, num_buckets=vocab_size), dimension = int(math.sqrt(vocab_size)))) else: feature_columns.append( tf.feature_column.numeric_column(feature.name)) return feature_columns create_feature_columns() """ Explanation: 3.3 Create feature columns End of explanation """ def create_estimator(params, run_config): feature_columns = create_feature_columns() estimator = tf.estimator.DNNClassifier( weight_column=WEIGHT_COLUMN_NAME, label_vocabulary=TARGET_LABELS, feature_columns=feature_columns, hidden_units=params.hidden_units, config=run_config ) return estimator """ Explanation: 3.4 Instantiate and Estimator End of explanation """ from datetime import datetime def run_experiment(estimator, params, run_config, resume=False): tf.logging.set_verbosity(tf.logging.INFO) if not resume: if tf.gfile.Exists(run_config.model_dir): print("Removing previous artifacts...") tf.gfile.DeleteRecursively(run_config.model_dir) else: print("Resuming training...") train_spec = tf.estimator.TrainSpec( input_fn = make_input_fn( TRANSFORMED_DATA_DIR+'/train*.tfrecords', batch_size=params.batch_size, num_epochs=None, shuffle=True ), max_steps=params.max_steps ) eval_spec = tf.estimator.EvalSpec( input_fn = make_input_fn( TRANSFORMED_DATA_DIR+'/eval*.tfrecords', batch_size=params.batch_size, ), start_delay_secs=0, throttle_secs=0, steps=None ) time_start = datetime.utcnow() print("Experiment started at {}".format(time_start.strftime("%H:%M:%S"))) print(".......................................") tf.estimator.train_and_evaluate( estimator=estimator, train_spec=train_spec, eval_spec=eval_spec) time_end = datetime.utcnow() print(".......................................") print("Experiment finished at {}".format(time_end.strftime("%H:%M:%S"))) print("") time_elapsed = time_end - time_start print("Experiment elapsed time: {} seconds".format(time_elapsed.total_seconds())) """ Explanation: 3.5 Implement train and evaluate experiment End of explanation """ MODELS_LOCATION = 'models/census' MODEL_NAME = 'dnn_classifier' model_dir = os.path.join(MODELS_LOCATION, MODEL_NAME) os.environ['MODEL_DIR'] = model_dir params = tf.contrib.training.HParams() params.hidden_units = [128, 64] params.dropout = 0.15 params.batch_size = 128 params.max_steps = 1000 run_config = tf.estimator.RunConfig( tf_random_seed=19831006, save_checkpoints_steps=200, keep_checkpoint_max=3, model_dir=model_dir, log_step_count_steps=10 ) estimator = create_estimator(params, run_config) 
run_experiment(estimator, params, run_config) """ Explanation: 3.5 Run experiment End of explanation """ tf.logging.set_verbosity(tf.logging.ERROR) def make_serving_input_receiver_fn(): from tensorflow_transform.tf_metadata import schema_utils source_raw_schema = tfdv.load_schema_text(RAW_SCHEMA_LOCATION) raw_feature_spec = schema_utils.schema_as_feature_spec(source_raw_schema).feature_spec raw_feature_spec.pop(TARGET_FEATURE_NAME) if WEIGHT_COLUMN_NAME in raw_feature_spec: raw_feature_spec.pop(WEIGHT_COLUMN_NAME) # Create the interface for the serving function with the raw features raw_features = tf.estimator.export.build_parsing_serving_input_receiver_fn(raw_feature_spec)().features receiver_tensors = {feature: tf.placeholder(shape=[None], dtype=raw_features[feature].dtype) for feature in raw_features } receiver_tensors_expanded = {tensor: tf.reshape(receiver_tensors[tensor], (-1, 1)) for tensor in receiver_tensors } # Apply the transform function transformed_features = transform_output.transform_raw_features(receiver_tensors_expanded) return tf.estimator.export.ServingInputReceiver( transformed_features, receiver_tensors) export_dir = os.path.join(model_dir, 'export') if tf.gfile.Exists(export_dir): tf.gfile.DeleteRecursively(export_dir) estimator.export_savedmodel( export_dir_base=export_dir, serving_input_receiver_fn=make_serving_input_receiver_fn ) %%bash saved_models_base=${MODEL_DIR}/export/ saved_model_dir=${MODEL_DIR}/export/$(ls ${saved_models_base} | tail -n 1) echo ${saved_model_dir} saved_model_cli show --dir=${saved_model_dir} --all """ Explanation: 3.6 Export the model for serving End of explanation """ export_dir = os.path.join(model_dir, 'export') tf.gfile.ListDirectory(export_dir)[-1] saved_model_dir = os.path.join(export_dir, tf.gfile.ListDirectory(export_dir)[-1]) print(saved_model_dir) print() predictor_fn = tf.contrib.predictor.from_saved_model( export_dir = saved_model_dir, signature_def_key="predict" ) input = { 'age': [34.0], 'workclass': ['Private'], 'education': ['Doctorate'], 'education_num': [10.0], 'marital_status': ['Married-civ-spouse'], 'occupation': ['Prof-specialty'], 'relationship': ['Husband'], 'race': ['White'], 'gender': ['Male'], 'capital_gain': [0.0], 'capital_loss': [0.0], 'hours_per_week': [40.0], 'native_country':['Mexico'] } print(input) print() output = predictor_fn(input) print(output) """ Explanation: 3.7 Try out saved model End of explanation """ #%%bash #MODEL_NAME="census" #MODEL_VERSION="v1" #MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/census/dnn_classifier/export/exporter | tail -1) #gcloud ml-engine models create ${MODEL_NAME} --regions $REGION #gcloud ml-engine versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version 1.13 """ Explanation: 3.8 Deploy model to Cloud ML Engine End of explanation """ HEADER_DEFAULTS = [[0], [''], [0], [''], [0], [''], [''], [''], [''], [''], [0], [0], [0], [''], ['']] def make_eval_input_receiver_fn(): receiver_tensors = {'examples': tf.placeholder(dtype=tf.string, shape=[None])} columns = tf.decode_csv(receiver_tensors['examples'], record_defaults=HEADER_DEFAULTS) features = dict(zip(HEADER, columns)) print(features) for feature_name in features: if features[feature_name].dtype == tf.int32: features[feature_name] = tf.cast(features[feature_name], tf.int64) features[feature_name] = tf.reshape(features[feature_name], (-1, 1)) transformed_features = transform_output.transform_raw_features(features) features.update(transformed_features) return 
tfma.export.EvalInputReceiver( features=features, receiver_tensors=receiver_tensors, labels=features[TARGET_FEATURE_NAME] ) import tensorflow_model_analysis as tfma eval_model_dir = os.path.join(model_dir, "export/evaluate") if tf.gfile.Exists(eval_model_dir): tf.gfile.DeleteRecursively(eval_model_dir) tfma.export.export_eval_savedmodel( estimator=estimator, export_dir_base=eval_model_dir, eval_input_receiver_fn=make_eval_input_receiver_fn ) """ Explanation: 3.9 Export evaluation saved model End of explanation """
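# Debugging sketch (not part of the original pipeline): pull a single batch through
# the input_fn to sanity-check the transformed feature shapes before training.
# Assumes the TF 1.x graph/session style used elsewhere in this notebook, and that
# the dataset yields (features, labels) tuples because label_key is set.
check_input_fn = make_input_fn(TRANSFORMED_DATA_DIR + '/train*.tfrecords', batch_size=4)
features_batch, labels_batch = check_input_fn().make_one_shot_iterator().get_next()
with tf.Session() as sess:
    feature_values, label_values = sess.run([features_batch, labels_batch])
for name in sorted(feature_values):
    print(name, feature_values[name].shape)
print(TARGET_FEATURE_NAME, label_values)
""" Explanation: A small input-pipeline check that reuses make_input_fn and TRANSFORMED_DATA_DIR from the cells above and prints the shape of each transformed feature for one batch; treat the exact unpacking of the dataset elements as an assumption to verify against your TF version. End of explanation """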
amcdawes/QMlabs
Lab 8 - Two-particle systems.ipynb
mit
import matplotlib.pyplot as plt from numpy import sqrt,pi,sin,cos,arange from qutip import * """ Explanation: Two-particle systems An introduction to multi-particle spaces, starting with photon polarization states. This lab answers the question: How do we describe the state of two photons? End of explanation """ H = basis(2,0) V = basis(2,1) P45 = 1/sqrt(2)*(H+V) M45 = 1/sqrt(2)*(H-V) L = 1/sqrt(2)*(H+1j*V) R = 1/sqrt(2)*(H-1j*V) """ Explanation: The polarization states (in the HV-basis): End of explanation """ A = Qobj([[1],[2]]) B = Qobj([[3],[4]]) print(A) print(B) print(tensor(A,B)) """ Explanation: Define two-particle states using the tensor() function: Mathematically, we are taking the tensor product of two vectors. That product is a larger vector with twice as many entries as the individual state vectors. As long as we take the tensor products in the right order (i.e. always talking about photon 1 and photon 2 in that order) we can also make operators that act on two-photon states). In order to keep a consistent naming scheme, we'll call the first photon the signal photon and the second photon the idler photon. The names aren't particularly important but they come from the process we use in the lab: Spontaneous Parametric Down Conversion First, look at a generic pair of vectors and their tensor product: End of explanation """ C = Qobj([[1],[2],[3]]) D = Qobj([[4],[5],[6]]) print(tensor(C,D)) """ Explanation: So we see that the tensor product has the following elements: 1*3 = 3, 1*4 = 4, 2*3 = 6, 2*4 = 8. Essentially, we distributed the multiplication of the first vector through the second vector. Using the technical terms of vector spaces, the tensor product exists in a larger Hilbert space (the number of dimensions is the product of the dimensions of the original states). See this with larger initial states: two 3-dim vectors have a tensor product in 9-dim space: End of explanation """ HH = tensor(H,H) HV = tensor(H,V) VH = tensor(V,H) VV = tensor(V,V) # How do we represent HH? It is a vector with four elements. HH """ Explanation: Now, back to the quantum mechanics. Form the four different combinations of two photons: End of explanation """ Phv = H*H.dag() - V*V.dag() Phv """ Explanation: So we interpret the state $|HH\rangle$ as the vector (1,0,0,0) in a four-dimensional space. Recall: The polarization measurement operator (for one photon): End of explanation """ qeye(2) # 2-dimensional identity """ Explanation: Also, the identity is defined as qeye(n) for n dimensions in qutip: End of explanation """ Phv_s = tensor(Phv,qeye(2)) Phv_s """ Explanation: The two-photon operator, measuring the signal photon, is formed with the tensor() function. It is the tensor product of the projection operator Phv and the 2-dimensional identity operator qeye(2). The trick is putting them in the correct order. The first element in the tensor product acts on the signal photon, the second acts on the idler photon. So to act on only the signal photon, we create a tensor product with the projection operator first, and the identity second: End of explanation """ Phv_i = tensor(qeye(2),Phv) Phv_i """ Explanation: It can be hard to interpret these values visually but remember it was constructed by multiplying all the terms between two matrices with only diagonal elements. It makes sense that the result is also diagonal. Also, the sign of the diagonal depends on the state of the signal photon (the first one listed). 
Recall the states are in the order: HH, HV, VH, VV so the first two states have H signal photons and are therefore 1, and the second two states are V signal photons so -1 for those diagonals. Now construct the two-photon operator that measures the idler photon: End of explanation """ Ph = H*H.dag() Ph_i = tensor(qeye(2),Ph) # Ph for idler photon """ Explanation: Next, construct a projection operator that projects the idler photon to H: End of explanation """ Ph_s = tensor(Ph,qeye(2)) # Ph for signal photon """ Explanation: And the same but for the signal photon: End of explanation """ HH.dag()*Ph_i*HH """ Explanation: You start to see the pattern. Build these up from our earlier operators, just apply them to the specific particle by including them in the tensor product at that position. Next we will do some example calculations. Example: find the probability of measuring a horizontal idler photon if the system is prepared in the state $|HH\rangle$ End of explanation """ psi = tensor(H,P45) # the prepared state psi.dag()*Ph_i*psi """ Explanation: Example: find the probability of measuring a horizontal idler photon in the state $|\psi\rangle = |H,+45\rangle$ End of explanation """ # First, form the prepared state: psi = tensor(R,P45) # Then create the projection operator for the state we are asking about: projection = VH*VH.dag() # Finally, calculate the probability by computing the bra-ket: psi.dag()*projection*psi """ Explanation: Example 8.2 prob. of measuring vertical signal and horizontal idler if $|\psi\rangle = |R,+45\rangle$ End of explanation """ phiPlus = 1/sqrt(2)*(HH + VV) phiPlus.dag()*Ph_i*phiPlus # probability of measuring a horizontal idler photon: """ Explanation: Entangled states: A very interesting system can be set up where there are paired photons being created with unknown but correlated polarization. In this case, we can say the state is in a combination of $|HH\rangle$ and $|VV\rangle$. If either two-photon state is allowed, then the normalized state is $$\big|\phi^+\big\rangle = \frac{1}{\sqrt{2}}\big( \big|HH\big\rangle + \big|VV\big\rangle \big)$$ End of explanation """ phiPlus.dag()*Ph_s*phiPlus # probability of measuring a horizontal signal photon """ Explanation: This is expected, because the HH state has 50% of the probability amplitude. Same for a horizontal signal photon: End of explanation """ # Projection operator for H idler and H signal: phh = HH*HH.dag() phiPlus.dag()*phh*phiPlus # Projection operator for H idler Pih = tensor(qeye(2),H*H.dag()) phiPlus.dag()*Pih*phiPlus """ Explanation: Now, find $P(H_s|H_i)$ (Example 8.5) End of explanation """ 0.5/0.5 """ Explanation: $P(H_s|H_i) = \frac{P(H_s,H_i)}{P(H_i)}$ End of explanation """ # Solution # Probability that signal is +45 and idler +45 Pp45p45 = tensor(P45,P45) * tensor(P45,P45).dag() phiPlus.dag()*Pp45p45*phiPlus # Solution # Probability that the idler is +45 regardless of the signal Pp45i = tensor(qeye(2),P45) * tensor(qeye(2),P45).dag() phiPlus.dag()*Pp45i*phiPlus """ Explanation: Guaranteed to measure a horizontal signal photon whenever a horizontal idler photon is measured. What about vertical? 
Find the conditional probability of measuring a vertical signal photon if the idler photon is found to be vertical: Now, measure a different basis (use the +45 states) to show that the photons are always found in the same polarization even when measured at a different angle: End of explanation """ # Solution # Probability that they are in different 45 states: Pp45m45 = tensor(P45,M45) * tensor(P45,M45).dag() phiPlus.dag()*Pp45m45*phiPlus """ Explanation: Finally, to really drive this odd point home, show that they are never found in the $\big|+45,-45\big\rangle$ state: End of explanation """
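# Worked sketch of the conditional-probability question posed above, following the
# same bra-ket pattern as Example 8.5: P(V_s|V_i) = P(V_s,V_i) / P(V_i) for phiPlus.
P_vv = VV * VV.dag()                    # projects onto |VV>
P_vi = tensor(qeye(2), V * V.dag())     # projects only the idler onto V
prob_joint = phiPlus.dag() * P_vv * phiPlus   # should be 0.5
prob_idler = phiPlus.dag() * P_vi * phiPlus   # should be 0.5
print(prob_joint)
print(prob_idler)
# conditional probability = 0.5 / 0.5 = 1: a vertical idler guarantees a vertical signal
""" Explanation: A short sketch (not an additional part of the original lab) answering the vertical case the same way the horizontal case was handled above: for the entangled state phiPlus the conditional probability P(V_s|V_i) is 1, so the two photons always agree in the HV basis. End of explanation """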
FISHunderscore/Pendulum-Wave
Pendulum-Wave.ipynb
gpl-3.0
import math from math import pi lengthsM = [] danceDuration = 60 mostOscils = 51 # Where n is the index of the pendulum (starting at 0) length = lambda n: 9.81 * (danceDuration / (2*pi*(mostOscils + n)))**2 for n in range(12): # 12 pendulums, indexed 0-11 lengthsM.append(length(n)) # Convert lengths to inches for ease lengthsIn = [] for length in lengthsM: lengthsIn.append(length*39.3701) lengthsMRounded = [] for length in lengthsM: lengthsMRounded.append(round(length, 3)) lengthsInRounded = [] for length in lengthsIn: lengthsInRounded.append(round(length, 3)) print("Lengths in meters: {}".format(lengthsMRounded)) print() print("Lengths in inches: {}".format(lengthsInRounded)) """ Explanation: Constructing a Pendulum Wave by Ganden Schaffner The basis of a pendulum wave is that, in a certain time (the time of the whole "dance" - let's call this $\Gamma$) the pendulum on the longest length string will oscillate $N$ times. The length of each successively shorter pendulum is adjusted such that it executes one additional oscillation in this same time $\Gamma$. $\Gamma$ will be defined as $60 seconds$, as this number is arbitrary, and 60 seconds is a good, even duration after which the dance will reset and repeat. (The dance will repeat infinitely, assuming that air resistance is negligible. In reality, all pendulums will come to a stop after some time due to air resistance, but 60 seconds should be a short enough time that air resistance will have little effect on the pendulums over this $\Gamma$.) Other pendulum waves have found $N=51$ to be a good number, used in conjunction with $\Gamma=60 seconds$. Changing $N$ would make the "dance" quicker or slower, and others have found 51 oscillations over 60 seconds (for the longest pendulum) to provide a good viewing speed, so we will use $N=51$ as well. Therefore, if 12 pendulums are used (another arbitrary number that has worked well in creating other pendulum wave machines), the 1st pendulum (longest) undergoes 51 oscillations in $\Gamma$ seconds, while the 12th pendulum (shortest) undergoes 62 oscillations (51 oscillations + 11 pendulums from the first) in this same $\Gamma=60 seconds$. Deriving an Equation for the String Length of Each Pendulum Background Information This derivation assumes that the pendulum only operates under the small angle approximation, where $ \sin(\theta) = \theta$ (in radians). This means that the pendulum should not exceed angles of about ten degrees from its equilibrium position.
The position of the bob on a pendulum as a function of time, with the bob starting at its maximum displacement from the equilibrium position, is $x(t) = A\cos(2\pi f t) = A\cos(\frac{2\pi t}{T})$ The period of a pendulum can be written $T = 2\pi \sqrt{\frac{l}{g}}$ Deriving $l$ Since $T = \frac{seconds}{oscillation}$, the period of the longest pendulum is $T = \frac{\Gamma}{N}$ Because other pendulums undergo $N$ + (pendulums from the first) oscillations, their period can be written as a function of pendulums from the first: $T(n) = \frac{\Gamma}{N + n}$ Knowing that $T = 2\pi \sqrt{\frac{l}{g}}$, $T(n)$ can be substituted for $T$, yielding $\frac{\Gamma}{N + n} = 2\pi \sqrt{\frac{l(n)}{g}}$ $l$ can then be solved for: * $\frac{\Gamma}{2\pi(N + n)} = \sqrt{\frac{l(n)}{g}}$ * $(\frac{\Gamma}{2\pi(N + n)})^2 = \frac{l(n)}{g}$ * $l(n) = g(\frac{\Gamma}{2\pi(N + n)})^2$ For our pendulum wave, consisting of twelve pendulums, Python 3 code was written to solve for the required $l$ values (with $n$ ranging from 0 to 11 for the longest to shortest pendulums, respectively). This code and its output can be seen below:
import numpy as np import matplotlib.pyplot as plt from matplotlib import animation %matplotlib inline location = np.linspace(0, 0.056*11, 12) funcTheta = lambda theta_i, n, t: theta_i * math.cos((2 * pi * t) / (60 / (51 + n))) funcDispX = lambda l, theta: l * math.sin(theta) funcDispY = lambda l, theta: -1 * l * math.cos(theta) # Set up the figure, the axis, and the plot we want to animate fig, ax = plt.subplots() ax.set_xlim([0, location[11]]) ax.set_ylim([-0.06, 0.06]) line, = ax.plot([], [], linestyle='--', marker='o') plt.xlabel("Position along the machine, meters") plt.ylabel("Horizonal displacement from equilibrium, meters") fig.suptitle("Top View at t = 0 seconds", fontsize=12) ax.spines['left'].set_position('zero') ax.spines['right'].set_color('none') ax.spines['bottom'].set_position('zero') ax.spines['top'].set_color('none') ax.xaxis.labelpad = 100 def init1(): line.set_data([], []) return (line,) def animate1(t): displacementArr = [] for n in range(12): displacementArr.append(funcDispX(lengthsM[n], funcTheta(math.radians(10), n, t))) fig.suptitle("Top View at t = {:.3f} seconds".format(round(t, 3)), fontsize=12) line.set_data(location, np.array(displacementArr)) return (line,) fps = 60 anim = animation.FuncAnimation(fig, animate1, init_func=init1, frames=np.linspace(0, 60, 60*fps//2), blit=True) anim.save('animTopDown.mp4', writer='ffmpeg', fps=str(fps), extra_args=['-vcodec', 'libx264']) plt.close() """ Explanation: From this point, the pendulum was built. Note that the lengths listed above are from the pivot point of the pendulums to the centers of mass of the hanging golf balls, so the actual strings will be shorter than the numbers given above. The pendulums were spaced so that there was 0.5in between the golf balls, meaning that strings were hung about 2.2in, or 5.6cm, apart. Viewing the Device from Above, Using Python Below, a visualization of the device can be seen (from above). This simulates the position of the center of mass of each golf ball, as if the pendulum was started with the balls all 10º from their equilibrium position. A dashed line has been drawn in to show the approximate pattern of the pendulums.
End of explanation """ # Set up the figure, the axis, and the plot we want to animate fig, ax = plt.subplots() ax.set_xlim([-0.15, 0.15]) ax.set_ylim([-0.4, 0]) line, = ax.plot([], [], linestyle='', marker='o') plt.xlabel("Horizonal displacement from equilibrium, meters") plt.ylabel("Vertical displacement from equilibrium, meters") fig.suptitle("Top View at t = 0 seconds", fontsize=12) ax.spines['left'].set_position('zero') ax.spines['right'].set_color('none') ax.spines['bottom'].set_position('zero') ax.spines['top'].set_color('none') ax.xaxis.labelpad = 220 ax.yaxis.labelpad = 150 def init2(): line.set_data([], []) return (line,) def animate2(t): xPos = [] yPos = [] for n in range(12): xPos.append(funcDispX(lengthsM[n], funcTheta(math.radians(10), n, t))) yPos.append(funcDispY(lengthsM[n], funcTheta(math.radians(10), n, t))) fig.suptitle("Head-On View at t = {:.3f} seconds".format(round(t, 3)), fontsize=12) line.set_data(np.array(xPos), np.array(yPos)) return (line,) fps = 60 anim = animation.FuncAnimation(fig, animate2, init_func=init2, frames=np.linspace(0, 60, 60*fps//2), blit=True) anim.save('animHeadOn.mp4', writer='ffmpeg', fps=str(fps), extra_args=['-vcodec', 'libx264']) plt.close() """ Explanation: <img src="animTopDown.gif"> Visualizing the Device from Head-On, Using Python Below, a visualization of the device can be seen (from head-on). This simulates the position the center of mass of each golf ball, as if the pendulum was started with the balls all 10º from their equilibrium position. End of explanation """
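# Numerical check of the derivation above (a sketch, not part of the build): each
# length l(n) should give exactly 51 + n oscillations over the 60 second dance.
for n, l in enumerate(lengthsM):
    T = 2 * pi * math.sqrt(l / 9.81)          # period from T = 2*pi*sqrt(l/g)
    print(n, round(danceDuration / T, 6))     # should print 51.0, 52.0, ..., 62.0
""" Explanation: A quick verification sketch using only names defined earlier in the notebook (lengthsM, danceDuration, math, pi): dividing the 60-second dance duration by each pendulum's period recovers the intended oscillation counts 51 through 62, confirming the formula l(n) = g*(Gamma/(2*pi*(N+n)))**2. End of explanation """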
mne-tools/mne-tools.github.io
0.14/_downloads/plot_cluster_stats_time_frequency.ipynb
bsd-3-clause
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr> # # License: BSD (3-clause) import numpy as np import matplotlib.pyplot as plt import mne from mne import io from mne.time_frequency import single_trial_power from mne.stats import permutation_cluster_test from mne.datasets import sample print(__doc__) """ Explanation: .. _tut_stats_cluster_sensor_2samp_tfr: Non-parametric between conditions cluster statistic on single trial power This script shows how to compare clusters in time-frequency power estimates between conditions. It uses a non-parametric statistical procedure based on permutations and cluster level statistics. The procedure consists in: extracting epochs for 2 conditions compute single trial power estimates baseline line correct the power estimates (power ratios) compute stats to see if the power estimates are significantly different between conditions. End of explanation """ data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif' event_id = 1 tmin = -0.2 tmax = 0.5 # Setup for reading the raw data raw = io.Raw(raw_fname) events = mne.read_events(event_fname) include = [] raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more # picks MEG gradiometers picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True, stim=False, include=include, exclude='bads') ch_name = raw.info['ch_names'][picks[0]] # Load condition 1 reject = dict(grad=4000e-13, eog=150e-6) event_id = 1 epochs_condition_1 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks, baseline=(None, 0), reject=reject) data_condition_1 = epochs_condition_1.get_data() # as 3D matrix data_condition_1 *= 1e13 # change unit to fT / cm # Load condition 2 event_id = 2 epochs_condition_2 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks, baseline=(None, 0), reject=reject) data_condition_2 = epochs_condition_2.get_data() # as 3D matrix data_condition_2 *= 1e13 # change unit to fT / cm # Take only one channel data_condition_1 = data_condition_1[:, 97:98, :] data_condition_2 = data_condition_2[:, 97:98, :] # Time vector times = 1e3 * epochs_condition_1.times # change unit to ms # Factor to downsample the temporal dimension of the PSD computed by # single_trial_power. Decimation occurs after frequency decomposition and can # be used to reduce memory usage (and possibly comptuational time of downstream # operations such as nonparametric statistics) if you don't need high # spectrotemporal resolution. 
decim = 2 frequencies = np.arange(7, 30, 3) # define frequencies of interest sfreq = raw.info['sfreq'] # sampling in Hz n_cycles = 1.5 epochs_power_1 = single_trial_power(data_condition_1, sfreq=sfreq, frequencies=frequencies, n_cycles=n_cycles, decim=decim) epochs_power_2 = single_trial_power(data_condition_2, sfreq=sfreq, frequencies=frequencies, n_cycles=n_cycles, decim=decim) epochs_power_1 = epochs_power_1[:, 0, :, :] # only 1 channel to get 3D matrix epochs_power_2 = epochs_power_2[:, 0, :, :] # only 1 channel to get 3D matrix # Compute ratio with baseline power (be sure to correct time vector with # decimation factor) baseline_mask = times[::decim] < 0 epochs_baseline_1 = np.mean(epochs_power_1[:, :, baseline_mask], axis=2) epochs_power_1 /= epochs_baseline_1[..., np.newaxis] epochs_baseline_2 = np.mean(epochs_power_2[:, :, baseline_mask], axis=2) epochs_power_2 /= epochs_baseline_2[..., np.newaxis] """ Explanation: Set parameters End of explanation """ threshold = 6.0 T_obs, clusters, cluster_p_values, H0 = \ permutation_cluster_test([epochs_power_1, epochs_power_2], n_permutations=100, threshold=threshold, tail=0) """ Explanation: Compute statistic End of explanation """ plt.clf() plt.subplots_adjust(0.12, 0.08, 0.96, 0.94, 0.2, 0.43) plt.subplot(2, 1, 1) evoked_contrast = np.mean(data_condition_1, 0) - np.mean(data_condition_2, 0) plt.plot(times, evoked_contrast.T) plt.title('Contrast of evoked response (%s)' % ch_name) plt.xlabel('time (ms)') plt.ylabel('Magnetic Field (fT/cm)') plt.xlim(times[0], times[-1]) plt.ylim(-100, 200) plt.subplot(2, 1, 2) # Create new stats image with only significant clusters T_obs_plot = np.nan * np.ones_like(T_obs) for c, p_val in zip(clusters, cluster_p_values): if p_val <= 0.05: T_obs_plot[c] = T_obs[c] plt.imshow(T_obs, extent=[times[0], times[-1], frequencies[0], frequencies[-1]], aspect='auto', origin='lower', cmap='RdBu_r') plt.imshow(T_obs_plot, extent=[times[0], times[-1], frequencies[0], frequencies[-1]], aspect='auto', origin='lower', cmap='RdBu_r') plt.xlabel('time (ms)') plt.ylabel('Frequency (Hz)') plt.title('Induced power (%s)' % ch_name) plt.show() """ Explanation: View time-frequency plots End of explanation """
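# Illustrative alternative (not used in this example): derive the cluster-forming
# threshold from the F distribution matching the default two-sample stat_fun,
# instead of hard-coding threshold = 6.0. The p_thresh value below is an assumption.
from scipy import stats
n1, n2 = len(epochs_power_1), len(epochs_power_2)
p_thresh = 0.001
f_thresh = stats.f.ppf(1. - p_thresh, dfn=1, dfd=n1 + n2 - 2)
print('F threshold for p < %g with %d and %d trials: %.2f'
      % (p_thresh, n1, n2, f_thresh))
""" Explanation: A sketch of one common way to choose the cluster-forming threshold: take the (1 - p) quantile of the F distribution with 1 and n1 + n2 - 2 degrees of freedom, which is what the default one-way F statistic over two conditions follows; the fixed threshold=6.0 above remains the value actually used in this example. End of explanation """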
wyndwarrior/HouseRank
DataCollection/craigslist_scraper.ipynb
mit
url_base = 'http://sfbay.craigslist.org/search/eby/apa' params = dict(search_distance=4, postal=94720) rsp = requests.get(url_base, params=params) html = bs4(rsp.text, 'html.parser') apts = html.find_all('p', attrs={'class': 'row'}) import time cl_data = [] for i in [0,100,200,300,400,500,600,700,800,900,1000,1100]: params = dict(search_distance=4, postal=94720,s=i) rsp = requests.get(url_base, params=params) html = bs4(rsp.text, 'html.parser') apts = html.find_all('p', attrs={'class': 'row'}) for apt in apts: url = "https://sfbay.craigslist.org" + apt.find('a', attrs={'class': 'hdrlnk'})['href'] try: size = apt.findAll(attrs={'class': 'housing'})[0].text except IndexError: size = "Not Listed" title = apt.find('a',attrs={'class': 'hdrlnk'}).text try: price = apt.findAll(attrs={'class': 'price'})[0].text except IndexError: price = "Not Listed" location = apt.findAll(attrs={'class': 'pnr'})[0].text #print url,size,title,price,location cl_string = url + "," + size + "," + title + "," + price + "," + location + "\n" cl_data.append(cl_string) time.sleep(5) f1=open('cl.csv', 'w+') f1.write('url,size,title,price,location\n') for data in cl_data: try: f1.write(data) except: pass f1.close() print "done" """ Explanation: Scraper 1 Fast craigslist scraper. Only gets price, size, title, city End of explanation """ import time, json cl_data = [] for i in [400,500,600,700,800,900,1000]: time.sleep(3) url_base = 'http://sfbay.craigslist.org/search/eby/apa' params = dict(search_distance=4, postal=94720,s=i) rsp = requests.get(url_base, params=params) html = bs4(rsp.text, 'html.parser') apts = html.find_all('p', attrs={'class': 'row'}) #for apt in apts: data = {} for apt in apts: time.sleep(1) url = "https://sfbay.craigslist.org" + apt.find('a', attrs={'class': 'hdrlnk'})['href'] r = urllib.urlopen(url).read() soup = bs4(r) final_dict = {} title = soup.findAll("span", {"id": "titletextonly"})[0].text try: size = soup.find("span", {"class": "housing"}).text except: size = "n/a" try: price = soup.findAll("span", {"class": "price"})[0].text except: price = "n/a" try: city = soup.findAll("small")[0].text except: city = "n/a" try: longitude = soup.findAll("div", {"class": "viewposting"})[0]['data-longitude'] latitude = soup.findAll("div", {"class": "viewposting"})[0]['data-latitude'] except: longitude = "n/a" latitude = "n/a" try: features = soup.find(id='postingbody').text except: features = "n/a" try: open_house = soup.find("span", {"class": "otherpostings"}).text except: open_house = "n/a" images = [] gmap = "n/a" for a in soup.find_all('a', href=True): if "images.craigslist.org" in a['href']: images.append(a['href']) if "maps.google.com" in a['href']: gmap = a['href'] final_dict['title'] = title final_dict['price'] = price final_dict['city'] = city final_dict['longitude'] = longitude final_dict['latitude'] = latitude final_dict['features'] = features final_dict['open_house'] = open_house final_dict['images'] = images final_dict['gmap'] = gmap final_dict['size'] = size data[url] = final_dict filename = "data" + str(i) + ".json" with open(filename, 'w') as outfile: json.dump(data, outfile) """ Explanation: Scraper 2 More thorough, grabs size, price, city, lat/long, features, open house, images End of explanation """
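# Follow-up sketch (not part of the original scraper): load the CSV written by
# Scraper 1 and coerce the scraped price strings into numbers for later analysis.
# The price_usd column name is just an illustration.
import pandas as pd
listings = pd.read_csv('cl.csv')
listings['price_usd'] = pd.to_numeric(
    listings['price'].str.lstrip('$').str.replace(',', ''), errors='coerce')
print(listings['price_usd'].describe())
""" Explanation: A minimal post-processing sketch, assuming the cl.csv file produced above is present: it strips the dollar sign and thousands separators from the price column (rows such as "Not Listed" become NaN via errors='coerce') so the listings can be summarized numerically. End of explanation """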
ruleva1983/udacity-mle
boston_housing/boston_housing.ipynb
gpl-3.0
# Import libraries necessary for this project import numpy as np import pandas as pd import visuals as vs # Supplementary code from sklearn.cross_validation import ShuffleSplit # Pretty display for notebooks %matplotlib inline # Load the Boston housing dataset data = pd.read_csv('housing.csv') prices = data['MEDV'] features = data.drop('MEDV', axis = 1) # Success print "Boston housing dataset has {} data points with {} variables each.".format(*data.shape) print data.head() """ Explanation: Machine Learning Engineer Nanodegree Model Evaluation & Validation Project 1: Predicting Boston Housing Prices Welcome to the first project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully! In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide. Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode. Getting Started In this project, you will evaluate the performance and predictive power of a model that has been trained and tested on data collected from homes in suburbs of Boston, Massachusetts. A model trained on this data that is seen as a good fit could then be used to make certain predictions about a home — in particular, its monetary value. This model would prove to be invaluable for someone like a real estate agent who could make use of such information on a daily basis. The dataset for this project originates from the UCI Machine Learning Repository. The Boston housing data was collected in 1978 and each of the 506 entries represent aggregated data about 14 features for homes from various suburbs in Boston, Massachusetts. For the purposes of this project, the following preprocessing steps have been made to the dataset: - 16 data points have an 'MEDV' value of 50.0. These data points likely contain missing or censored values and have been removed. - 1 data point has an 'RM' value of 8.78. This data point can be considered an outlier and has been removed. - The features 'RM', 'LSTAT', 'PTRATIO', and 'MEDV' are essential. The remaining non-relevant features have been excluded. - The feature 'MEDV' has been multiplicatively scaled to account for 35 years of market inflation. Run the code cell below to load the Boston housing dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported. 
End of explanation """ # TODO: Minimum price of the data minimum_price = np.min(prices) # TODO: Maximum price of the data maximum_price = np.max(prices) # TODO: Mean price of the data mean_price = np.mean(prices) # TODO: Median price of the data median_price = np.median(prices) # TODO: Standard deviation of prices of the data std_price = np.std(prices) # Show the calculated statistics print "Statistics for Boston housing dataset:\n" print "Minimum price: ${:,.2f}".format(minimum_price) print "Maximum price: ${:,.2f}".format(maximum_price) print "Mean price: ${:,.2f}".format(mean_price) print "Median price ${:,.2f}".format(median_price) print "Standard deviation of prices: ${:,.2f}".format(std_price) """ Explanation: Data Exploration In this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results. Since the main goal of this project is to construct a working model which has the capability of predicting the value of houses, we will need to separate the dataset into features and the target variable. The features, 'RM', 'LSTAT', and 'PTRATIO', give us quantitative information about each data point. The target variable, 'MEDV', will be the variable we seek to predict. These are stored in features and prices, respectively. Implementation: Calculate Statistics For your very first coding implementation, you will calculate descriptive statistics about the Boston housing prices. Since numpy has already been imported for you, use this library to perform the necessary calculations. These statistics will be extremely important later on to analyze various prediction results from the constructed model. In the code cell below, you will need to implement the following: - Calculate the minimum, maximum, mean, median, and standard deviation of 'MEDV', which is stored in prices. - Store each calculation in their respective variable. End of explanation """ # TODO: Import 'r2_score' from sklearn.metrics import r2_score def performance_metric(y_true, y_predict): """ Calculates and returns the performance score between true and predicted values based on the metric chosen. """ # TODO: Calculate the performance score between 'y_true' and 'y_predict' score = r2_score(y_true, y_predict) # Return the score return score """ Explanation: Question 1 - Feature Observation As a reminder, we are using three features from the Boston housing dataset: 'RM', 'LSTAT', and 'PTRATIO'. For each data point (neighborhood): - 'RM' is the average number of rooms among homes in the neighborhood. - 'LSTAT' is the percentage of homeowners in the neighborhood considered "lower class" (working poor). - 'PTRATIO' is the ratio of students to teachers in primary and secondary schools in the neighborhood. Using your intuition, for each of the three features above, do you think that an increase in the value of that feature would lead to an increase in the value of 'MEDV' or a decrease in the value of 'MEDV'? Justify your answer for each. Hint: Would you expect a home that has an 'RM' value of 6 be worth more or less than a home that has an 'RM' value of 7? Answer: RM feature: I expect houses with larger number of rooms to have higher value. LSTAT feature: If the percentage of owners considered lower class increases, I expect the values of the houses in that area to decrease in average. 
PTRATIO feature: a higher ratio means that the schools have many students relative to teachers, and people may feel that the education level is not as high as in other areas. Therefore I expect that the price of houses would decrease as this feature increases.
Developing a Model
In this second section of the project, you will develop the tools and techniques necessary for a model to make a prediction. Being able to make accurate evaluations of each model's performance through the use of these tools and techniques helps to greatly reinforce the confidence in your predictions.
Implementation: Define a Performance Metric
It is difficult to measure the quality of a given model without quantifying its performance over training and testing. This is typically done using some type of performance metric, whether it is through calculating some type of error, the goodness of fit, or some other useful measurement. For this project, you will be calculating the coefficient of determination, R<sup>2</sup>, to quantify your model's performance. The coefficient of determination for a model is a useful statistic in regression analysis, as it often describes how "good" that model is at making predictions.
The values for R<sup>2</sup> range from 0 to 1, which captures the percentage of squared correlation between the predicted and actual values of the target variable. A model with an R<sup>2</sup> of 0 always fails to predict the target variable, whereas a model with an R<sup>2</sup> of 1 perfectly predicts the target variable. Any value between 0 and 1 indicates what percentage of the target variable, using this model, can be explained by the features. A model can be given a negative R<sup>2</sup> as well, which indicates that the model is no better than one that naively predicts the mean of the target variable.
For the performance_metric function in the code cell below, you will need to implement the following:
- Use r2_score from sklearn.metrics to perform a performance calculation between y_true and y_predict.
- Assign the performance score to the score variable.
End of explanation
"""
# Calculate the performance of this model
score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
print "Model has a coefficient of determination, R^2, of {:.3f}.".format(score)
"""
Explanation: Question 2 - Goodness of Fit
Assume that a dataset contains five data points and a model made the following predictions for the target variable:
| True Value | Prediction |
| :-------------: | :--------: |
| 3.0 | 2.5 |
| -0.5 | 0.0 |
| 2.0 | 2.1 |
| 7.0 | 7.8 |
| 4.2 | 5.3 |
Would you consider this model to have successfully captured the variation of the target variable? Why or why not?
Run the code cell below to use the performance_metric function and calculate this model's coefficient of determination.
End of explanation
"""
# TODO: Import 'train_test_split'
from sklearn.cross_validation import train_test_split
# TODO: Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = train_test_split(features, prices, test_size=0.2, random_state=0)
# Success
print "Training and testing split was successful."
"""
Explanation: Answer: According to the R^2 result above, the model successfully captures the variation in the target variable.
Implementation: Shuffle and Split Data
Your next implementation requires that you take the Boston housing dataset and split the data into training and testing subsets.
Typically, the data is also shuffled into a random order when creating the training and testing subsets to remove any bias in the ordering of the dataset. For the code cell below, you will need to implement the following: - Use train_test_split from sklearn.cross_validation to shuffle and split the features and prices data into training and testing sets. - Split the data into 80% training and 20% testing. - Set the random_state for train_test_split to a value of your choice. This ensures results are consistent. - Assign the train and testing splits to X_train, X_test, y_train, and y_test. End of explanation """ # Produce learning curves for varying training set sizes and maximum depths vs.ModelLearning(features, prices) """ Explanation: Question 3 - Training and Testing What is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm? Hint: What could go wrong with not having a way to test your model? Answer: The training/test split is a basic procedure for avoiding overfitting. Training is performed on the training set only, and once the model is ready, it can be tested on the test set. One looks for a similar error score for the two subset (low enough variance), and both scores to be low enough (low bias). Analyzing Model Performance In this third section of the project, you'll take a look at several models' learning and testing performances on various subsets of training data. Additionally, you'll investigate one particular algorithm with an increasing 'max_depth' parameter on the full training set to observe how model complexity affects performance. Graphing your model's performance based on varying criteria can be beneficial in the analysis process, such as visualizing behavior that may not have been apparent from the results alone. Learning Curves The following code cell produces four graphs for a decision tree model with different maximum depths. Each graph visualizes the learning curves of the model for both training and testing as the size of the training set is increased. Note that the shaded region of a learning curve denotes the uncertainty of that curve (measured as the standard deviation). The model is scored on both the training and testing sets using R<sup>2</sup>, the coefficient of determination. Run the code cell below and use these graphs to answer the following question. End of explanation """ vs.ModelComplexity(X_train, y_train) """ Explanation: Question 4 - Learning the Data Choose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? Would having more training points benefit the model? Hint: Are the learning curves converging to particular scores? Answer: Chosen max_depth=1 (first graph). This is a very low complexity model. As the number of training points increases the two errors converge to a common, but not good, value. The training score curve converges from above, while the testing score curve from below. This is a classical case of bias due to low complexity of the model. Adding new training examples would not help in this case in increasing neither the training nor the testing errors. Complexity Curves The following code cell produces a graph for a decision tree model that has been trained and validated on the training data using different maximum depths. The graph produces two complexity curves — one for training and one for validation. 
Similar to the learning curves, the shaded regions of both the complexity curves denote the uncertainty in those curves, and the model is scored on both the training and validation sets using the performance_metric function.
Run the code cell below and use this graph to answer the following two questions.
End of explanation
"""
# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import make_scorer
from sklearn.grid_search import GridSearchCV
def fit_model(X, y):
    """ Performs grid search over the 'max_depth' parameter for a
        decision tree regressor trained on the input data [X, y]. """
    # Create cross-validation sets from the training data
    cv_sets = ShuffleSplit(X.shape[0], n_iter = 10, test_size = 0.20, random_state = 0)
    # TODO: Create a decision tree regressor object
    regressor = DecisionTreeRegressor()
    # TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10
    params = {'max_depth': range(1,11)}
    # TODO: Transform 'performance_metric' into a scoring function using 'make_scorer'
    scoring_fnc = make_scorer(performance_metric)
    # TODO: Create the grid search object
    grid = GridSearchCV(regressor, params, scoring=scoring_fnc, cv=cv_sets)
    # Fit the grid search object to the data to compute the optimal model
    grid = grid.fit(X, y)
    # Return the optimal model after fitting the data
    return grid.best_estimator_
"""
Explanation: Question 5 - Bias-Variance Tradeoff
When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions?
Hint: How do you know when a model is suffering from high bias or high variance?
Answer:
- When the depth is 1 the model suffers from high bias. Justified by the fact that training and test scores are similar but low.
- When the depth is 10 we look at a case of high variance. Justified by the fact that training score is very high and differs significantly from testing score, a clear case of overfitting due to high variance.
Question 6 - Best-Guess Optimal Model
Which maximum depth do you think results in a model that best generalizes to unseen data? What intuition lead you to this answer?
Answer: I would choose depth = 4 or 5. From this point on, the two scores start to differ: the training score keeps increasing, while the testing score keeps decreasing.
Evaluating Model Performance
In this final section of the project, you will construct a model and make a prediction on the client's feature set using an optimized model from fit_model.
Question 7 - Grid Search
What is the grid search technique and how it can be applied to optimize a learning algorithm?
Answer: Grid search is a technique to optimize the hyperparameters of an estimator/model. A subset of possible parameter configurations one wishes to test needs to be provided. On each of these configurations a different model is trained and evaluated using the k-fold cross-validation method. In the end, a hierarchy of the best models is obtained, and the user can proceed with the appropriate model selection (usually the best model is taken).
Question 8 - Cross-Validation
What is the k-fold cross-validation training technique? What benefit does this technique provide for grid search when optimizing a model?
Hint: Much like the reasoning behind having a testing set, what could go wrong with using grid search without a cross-validated set?
Answer: It consists in dividing the training set in k different and equally sized subsets. Each time one of the subset will be used entirely for testing, while a model is trained using the rest. In this way we get k different models having k different testing set errors. The average of these errors, the cross-validation error, is a measure of the overall performance of the algorithm. When running grid search together with cross-validation, for each parameter configuration we can extract the cross-validation error or score, which will give us information on which values of the parameters are best for our algorithm and problem. The advantage in respect to a simple training/test session (1 fold cross-validation) is that the algorithm will be tested independently on all parts of the training set. Implementation: Fitting a Model Your final implementation requires that you bring everything together and train a model using the decision tree algorithm. To ensure that you are producing an optimized model, you will train the model using the grid search technique to optimize the 'max_depth' parameter for the decision tree. The 'max_depth' parameter can be thought of as how many questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are part of a class of algorithms called supervised learning algorithms. For the fit_model function in the code cell below, you will need to implement the following: - Use DecisionTreeRegressor from sklearn.tree to create a decision tree regressor object. - Assign this object to the 'regressor' variable. - Create a dictionary for 'max_depth' with the values from 1 to 10, and assign this to the 'params' variable. - Use make_scorer from sklearn.metrics to create a scoring function object. - Pass the performance_metric function as a parameter to the object. - Assign this scoring function to the 'scoring_fnc' variable. - Use GridSearchCV from sklearn.grid_search to create a grid search object. - Pass the variables 'regressor', 'params', 'scoring_fnc', and 'cv_sets' as parameters to the object. - Assign the GridSearchCV object to the 'grid' variable. End of explanation """ # Fit the training data to the model using grid search reg = fit_model(X_train, y_train) # Produce the value for 'max_depth' print "Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth']) """ Explanation: Making Predictions Once a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a decision tree regressor, the model has learned what the best questions to ask about the input data are, and can respond with a prediction for the target variable. You can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on. Question 9 - Optimal Model What maximum depth does the optimal model have? How does this result compare to your guess in Question 6? Run the code block below to fit the decision tree regressor to the training data and produce an optimal model. 
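As a quick illustration of how the optimized model is used (the feature values here are hypothetical; the client predictions further below do this properly), the regressor returned by fit_model can be queried directly:
reg = fit_model(X_train, y_train)
reg.predict([[6, 15, 18]])   # [RM, LSTAT, PTRATIO] -> predicted 'MEDV'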
End of explanation """ #RM LSTAT PTRATIO # Produce a matrix for client data client_data = [[5, 17, 15], # Client 1 [4, 32, 22], # Client 2 [8, 3, 12]] # Client 3 # Show predictions for i, price in enumerate(reg.predict(client_data)): print "Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price) """ Explanation: Answer: The answer given is max_depth = 4 which compatible with the results in section 6. Question 10 - Predicting Selling Prices Imagine that you were a real estate agent in the Boston area looking to use this model to help price homes owned by your clients that they wish to sell. You have collected the following information from three of your clients: | Feature | Client 1 | Client 2 | Client 3 | | :---: | :---: | :---: | :---: | | Total number of rooms in home | 5 rooms | 4 rooms | 8 rooms | | Neighborhood poverty level (as %) | 17% | 32% | 3% | | Student-teacher ratio of nearby schools | 15-to-1 | 22-to-1 | 12-to-1 | What price would you recommend each client sell his/her home at? Do these prices seem reasonable given the values for the respective features? Hint: Use the statistics you calculated in the Data Exploration section to help justify your response. Run the code block below to have your optimized model make predictions for each client's home. End of explanation """ vs.PredictTrials(features, prices, fit_model, client_data) """ Explanation: Answer: Until this analysis point: I would recommend to client 1 a price around 391,000 dollars I would recommend to client 2 a price around 189,000 dollars I would recommend to client 3 a price around 943,000 dollars All these prices seem reasonable. They are within the minimum and maximum prices in the training set. Also the values of the features are within the ranges of the training set features. Therefore these three new examples, given only three features, fit into the regression problem under investigation. Specifically for the third client, since it has to sell a house with many rooms, in an area with low poverty level and low student to teacher ratio, the high price of 943,000 dollars is appropriate, as one can check looking in the training set for similar instances. Sensitivity An optimal model is not necessarily a robust model. Sometimes, a model is either too complex or too simple to sufficiently generalize to new data. Sometimes, a model could use a learning algorithm that is not appropriate for the structure of the data given. Other times, the data itself could be too noisy or contain too few samples to allow a model to adequately capture the target variable — i.e., the model is underfitted. Run the code cell below to run the fit_model function ten times with different training and testing sets to see how the prediction for a specific client changes with the data it's trained on. End of explanation """
nwhidden/ND101-Deep-Learning
gan_mnist/Intro_to_GANs_Exercises.ipynb
mit
%matplotlib inline import pickle as pkl import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('MNIST_data') """ Explanation: Generative Adversarial Network In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits! GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out: Pix2Pix CycleGAN A whole list The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator, it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistiguishable from real data to the discriminator. The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to contruct it's fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator. The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates an real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow. End of explanation """ def model_inputs(real_dim, z_dim): inputs_real = tf.placeholder(tf.float32, (None, real_dim), name = 'input_real') inputs_z = tf.placeholder(tf.float32, (None, z_dim), name = 'input_z') return inputs_real, inputs_z """ Explanation: Model Inputs First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks. Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively. End of explanation """ def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01): ''' Build the generator network. Arguments --------- z : Input tensor for the generator out_dim : Shape of the generator output n_units : Number of units in hidden layer reuse : Reuse the variables with tf.variable_scope alpha : leak parameter for leaky ReLU Returns ------- out: ''' with tf.variable_scope('generator', reuse = reuse): # Hidden layer h1 = tf.layers.dense(z, n_units, activation = None) # Leaky ReLU h1 = tf.maximum(alpha * h1, h1) # Logits and tanh output logits = tf.layers.dense(h1, out_dim) out = tf.tanh(logits) return out """ Explanation: Generator network Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values. 
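For reference, a leaky ReLU can be written as a small helper around tf.maximum (just a sketch; the exercises below implement the same operation inline):
def leaky_relu(x, alpha=0.01):
    # pass positive values through unchanged, scale negative values by alpha
    return tf.maximum(alpha * x, x)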
Variable Scope Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks. We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again. To use tf.variable_scope, you use a with statement: python with tf.variable_scope('scope_name', reuse=False): # code here Here's more from the TensorFlow documentation to get another look at using tf.variable_scope. Leaky ReLU TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one . For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x: $$ f(x) = max(\alpha * x, x) $$ Tanh Output The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1. Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope. End of explanation """ def discriminator(x, n_units=128, reuse=False, alpha=0.01): ''' Build the discriminator network. Arguments --------- x : Input tensor for the discriminator n_units: Number of units in hidden layer reuse : Reuse the variables with tf.variable_scope alpha : leak parameter for leaky ReLU Returns ------- out, logits: ''' with tf.variable_scope('discriminator', reuse = reuse): # Hidden layer h1 = tf.layers.dense(x, n_units, activation = None) # Leaky ReLU h1 = tf.maximum(alpha * h1, h1) logits = tf.layers.dense(h1, 1, activation = None) out = tf.sigmoid(logits) return out, logits """ Explanation: Discriminator The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer. Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope. 
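One practical point worth making concrete: because the generator's tanh output lives in [-1, 1], the real MNIST batches fed to the discriminator need a matching rescale from [0, 1]. A one-line sketch (the training loop later in this notebook does exactly this):
batch_images = batch_images * 2 - 1   # map [0, 1] pixel values to [-1, 1]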
End of explanation """ # Size of input image to discriminator input_size = 784 # 28x28 MNIST images flattened # Size of latent vector to generator z_size = 100 # Sizes of hidden layers in generator and discriminator g_hidden_size = 128 d_hidden_size = 128 # Leak factor for leaky ReLU alpha = 0.01 # Label smoothing smooth = 0.1 """ Explanation: Hyperparameters End of explanation """ tf.reset_default_graph() # Create our input placeholders input_real, input_z = model_inputs(real_dim = input_size, z_dim = z_size) # Generator network here g_model = generator(input_z, input_size) # g_model is the generator output # Disriminator network here d_model_real, d_logits_real = discriminator(input_real) d_model_fake, d_logits_fake = discriminator(g_model, reuse = True) """ Explanation: Build network Now we're building the network from the functions defined above. First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z. Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes. Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True). Exercise: Build the network from the functions you defined earlier. End of explanation """ # Calculate losses d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits = d_logits_real, labels = tf.ones_like(d_logits_real) * (1 - smooth))) d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits = d_logits_fake, labels = tf.zeros_like(d_logits_fake))) d_loss = d_loss_real + d_loss_fake g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits = d_logits_fake, labels = tf.ones_like(d_logits_fake))) """ Explanation: Discriminator and Generator Losses Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like python tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels)) For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth) The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that. Finally, the generator losses are using d_logits_fake, the fake image logits. 
But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
End of explanation
"""
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
# Pass the learning rate defined above to each optimizer
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list = d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list = g_vars)
"""
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that updates the network variables separately.
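A small optional sanity check (a sketch) confirms that the name-based split really partitions all trainable variables between the two networks:
assert len(g_vars) + len(d_vars) == len(t_vars)   # every variable belongs to exactly one network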
End of explanation """ batch_size = 100 epochs = 100 samples = [] losses = [] saver = tf.train.Saver(var_list = g_vars) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) # Get images, reshape and rescale to pass to D batch_images = batch[0].reshape((batch_size, 784)) batch_images = batch_images*2 - 1 # Sample random noise for G batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size)) # Run optimizers _ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z}) _ = sess.run(g_train_opt, feed_dict={input_z: batch_z}) # At the end of each epoch, get the losses and print them out train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images}) train_loss_g = g_loss.eval({input_z: batch_z}) print("Epoch {}/{}...".format(e+1, epochs), "Discriminator Loss: {:.4f}...".format(train_loss_d), "Generator Loss: {:.4f}".format(train_loss_g)) # Save losses to view after training losses.append((train_loss_d, train_loss_g)) # Sample from generator as we're training for viewing afterwards sample_z = np.random.uniform(-1, 1, size=(16, z_size)) gen_samples = sess.run( generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha), feed_dict={input_z: sample_z}) samples.append(gen_samples) saver.save(sess, './checkpoints/generator.ckpt') # Save training generator samples with open('train_samples.pkl', 'wb') as f: pkl.dump(samples, f) """ Explanation: Training End of explanation """ %matplotlib inline import matplotlib.pyplot as plt fig, ax = plt.subplots() losses = np.array(losses) plt.plot(losses.T[0], label='Discriminator') plt.plot(losses.T[1], label='Generator') plt.title("Training Losses") plt.legend() """ Explanation: Training loss Here we'll check out the training losses for the generator and discriminator. End of explanation """ def view_samples(epoch, samples): fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True) for ax, img in zip(axes.flatten(), samples[epoch]): ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) im = ax.imshow(img.reshape((28,28)), cmap='Greys_r') return fig, axes # Load samples from generator taken while training with open('train_samples.pkl', 'rb') as f: samples = pkl.load(f) """ Explanation: Generator samples from training Here we can view samples of images from the generator. First we'll look at images taken while training. End of explanation """ _ = view_samples(-1, samples) """ Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make. End of explanation """ rows, cols = 10, 6 fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True) for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes): for img, ax in zip(sample[::int(len(sample)/cols)], ax_row): ax.imshow(img.reshape((28,28)), cmap='Greys_r') ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) """ Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion! 
End of explanation """ saver = tf.train.Saver(var_list=g_vars) with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) sample_z = np.random.uniform(-1, 1, size=(16, z_size)) gen_samples = sess.run( generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha), feed_dict={input_z: sample_z}) view_samples(0, [gen_samples]) """ Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3. Sampling from the generator We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples! End of explanation """
poethacker/hello
Clustering.ipynb
apache-2.0
import warnings
warnings.filterwarnings("ignore")
from collections import Counter
import numpy as np
from scipy import stats
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn import metrics
from sklearn.metrics import pairwise_distances
from sklearn.cluster import AgglomerativeClustering
from sklearn.cluster import DBSCAN
clusdf=pd.read_csv('C:\\Users\\ajaohri\\Desktop\\ODSP\\data\\plantTraits.csv')
"""
Explanation: Clustering
Methods Covered Here: K-Means, hierarchical clustering, DBSCAN, Gaussian Mixture Models, Birch, MiniBatch K-Means, Mean Shift
Silhouette Coefficient
If the ground truth labels are not known, evaluation must be performed using the model itself. The Silhouette Coefficient (sklearn.metrics.silhouette_score) is an example of such an evaluation, where a higher Silhouette Coefficient score relates to a model with better defined clusters. The Silhouette Coefficient is defined for each sample and is composed of two scores:
a: The mean distance between a sample and all other points in the same class.
b: The mean distance between a sample and all other points in the next nearest cluster.
The Silhouette Coefficient s for a single sample is then given as: s = (b - a) / max(a, b)
Homogeneity, completeness and V-measure
These measures are based on the following two desirable objectives for any cluster assignment:
- homogeneity: each cluster contains only members of a single class.
- completeness: all members of a given class are assigned to the same cluster.
Those concepts are turned into the scores homogeneity_score and completeness_score. Both are bounded below by 0.0 and above by 1.0 (higher is better).
Their harmonic mean, called the V-measure, is computed by v_measure_score.
K Means Clustering
The KMeans algorithm clusters data by trying to separate samples in n groups of equal variance, minimizing a criterion known as the inertia or within-cluster sum-of-squares. This algorithm requires the number of clusters to be specified. It scales well to large numbers of samples and has been used across a large range of application areas in many different fields.
The k-means algorithm divides a set of samples into disjoint clusters, each described by the mean of the samples in the cluster, called the cluster "centroids". The K-means algorithm aims to choose centroids that minimise the inertia, or within-cluster sum-of-squares criterion.
Inertia, or the within-cluster sum of squares criterion, can be recognized as a measure of how internally coherent clusters are. It suffers from various drawbacks:
Inertia makes the assumption that clusters are convex and isotropic, which is not always the case. It responds poorly to elongated clusters, or manifolds with irregular shapes.
Inertia is not a normalized metric. But in very high-dimensional spaces, Euclidean distances tend to become inflated (this is an instance of the so-called "curse of dimensionality"). Running a dimensionality reduction algorithm such as PCA prior to k-means clustering can alleviate this problem and speed up the computations.
End of explanation
"""
clusdf = clusdf.drop("Unnamed: 0", axis=1)
clusdf.head()
clusdf.info()
#missing values
clusdf.apply(lambda x: sum(x.isnull().values), axis = 0)
clusdf.head(20)
clusdf=clusdf.fillna(clusdf.mean())
"""
Explanation: https://vincentarelbundock.github.io/Rdatasets/doc/cluster/plantTraits.html
Usage
data(plantTraits)
Format
A data frame with 136 observations on the following 31 variables.
pdias Diaspore mass (mg) longindex Seed bank longevity durflow Flowering duration height Plant height, an ordered factor with levels 1 < 2 < ... < 8. begflow Time of first flowering, an ordered factor with levels 1 < 2 < 3 < 4 < 5 < 6 < 7 < 8 < 9 mycor Mycorrhizas, an ordered factor with levels 0never < 1 sometimes< 2always vegaer aerial vegetative propagation, an ordered factor with levels 0never < 1 present but limited< 2important. vegsout underground vegetative propagation, an ordered factor with 3 levels identical to vegaer above. autopoll selfing pollination, an ordered factor with levels 0never < 1rare < 2 often< the rule3 insects insect pollination, an ordered factor with 5 levels 0 < ... < 4. wind wind pollination, an ordered factor with 5 levels 0 < ... < 4. lign a binary factor with levels 0:1, indicating if plant is woody. piq a binary factor indicating if plant is thorny. ros a binary factor indicating if plant is rosette. semiros semi-rosette plant, a binary factor (0: no; 1: yes). leafy leafy plant, a binary factor. suman summer annual, a binary factor. winan winter annual, a binary factor. monocarp monocarpic perennial, a binary factor. polycarp polycarpic perennial, a binary factor. seasaes seasonal aestival leaves, a binary factor. seashiv seasonal hibernal leaves, a binary factor. seasver seasonal vernal leaves, a binary factor. everalw leaves always evergreen, a binary factor. everparti leaves partially evergreen, a binary factor. elaio fruits with an elaiosome (dispersed by ants), a binary factor. endozoo endozoochorous fruits, a binary factor. epizoo epizoochorous fruits, a binary factor. aquat aquatic dispersal fruits, a binary factor. windgl wind dispersed fruits, a binary factor. unsp unspecialized mechanism of seed dispersal, a binary factor. End of explanation """ from sklearn.decomposition import PCA from sklearn.preprocessing import scale clusdf_scale = scale(clusdf) n_samples, n_features = clusdf_scale.shape n_samples, n_features reduced_data = PCA(n_components=2).fit_transform(clusdf_scale) #assuming height to be Y variable to be predicted #n_digits = len(np.unique(clusdf.height)) #From R Cluster sizes: #[1] "26 29 5 32" n_digits=4 kmeans = KMeans(init='k-means++', n_clusters=n_digits, n_init=10) kmeans.fit(reduced_data) clusdf.head(20) # Plot the decision boundary. For that, we will assign a color to each h=0.02 x_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1 y_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) # Obtain labels for each point in mesh. Use last trained model. 
Z = kmeans.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(1)
plt.clf()
plt.imshow(Z, interpolation='nearest',
           extent=(xx.min(), xx.max(), yy.min(), yy.max()),
           cmap=plt.cm.Paired,
           aspect='auto', origin='lower')
plt.plot(reduced_data[:, 0], reduced_data[:, 1], 'k.', markersize=2)
# Plot the centroids as a white X
centroids = kmeans.cluster_centers_
plt.scatter(centroids[:, 0], centroids[:, 1],
            marker='x', s=169, linewidths=3,
            color='w', zorder=10)
plt.title('K-means clustering on the plantTraits dataset (PCA-reduced data)\n'
          'Centroids are marked with white cross')
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xticks(())
plt.yticks(())
plt.show()
kmeans = KMeans(n_clusters=4, random_state=0).fit(reduced_data)
kmeans.labels_
np.unique(kmeans.labels_, return_counts=True)
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(kmeans.labels_)
plt.show()
kmeans.cluster_centers_
metrics.silhouette_score(reduced_data, kmeans.labels_, metric='euclidean')
"""
Explanation: To measure the quality of clustering results, there are two kinds of validity indices: external indices and internal indices.
An external index is a measure of agreement between two partitions where the first partition is the a priori known clustering structure, and the second results from the clustering procedure (Dudoit et al., 2002).
Internal indices are used to measure the goodness of a clustering structure without external information (Tseng et al., 2005).
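To make the distinction concrete (a sketch; the plantTraits data has no ground-truth partition, so only an internal index can actually be computed here):
# internal index: needs only the data and the predicted labels
metrics.silhouette_score(reduced_data, kmeans.labels_, metric='euclidean')
# external index: would also need the true labels, e.g.
# metrics.adjusted_rand_score(labels_true, kmeans.labels_)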
End of explanation """ db = DBSCAN().fit(reduced_data) db db.labels_ clusdf.shape reduced_data.shape reduced_data[:10,:2] for i in range(0, reduced_data.shape[0]): if db.labels_[i] == 0: c1 = plt.scatter(reduced_data[i,0],reduced_data[i,1],c='r',marker='+') elif db.labels_[i] == 1: c2 = plt.scatter(reduced_data[i,0],reduced_data[i,1],c='g',marker='o') elif db.labels_[i] == -1:c3 = plt.scatter(reduced_data[i,0],reduced_data[i,1],c='b',marker='*') plt.legend([c1, c2, c3], ['Cluster 1', 'Cluster 2','Noise']) plt.title('DBSCAN finds 2 clusters and noise') plt.show() """ Explanation: DBSCAN The DBSCAN algorithm views clusters as areas of high density separated by areas of low density. Due to this rather generic view, clusters found by DBSCAN can be any shape, as opposed to k-means which assumes that clusters are convex shaped. The central component to the DBSCAN is the concept of core samples, which are samples that are in areas of high density. A cluster is therefore a set of core samples, each close to each other (measured by some distance measure) and a set of non-core samples that are close to a core sample (but are not themselves core samples). There are two parameters to the algorithm, min_samples and eps, which define formally what we mean when we say dense. Higher min_samples or lower eps indicate higher density necessary to form a cluster. More formally, we define a core sample as being a sample in the dataset such that there exist min_samples other samples within a distance of eps, which are defined as neighbors of the core sample. This tells us that the core sample is in a dense area of the vector space. A cluster is a set of core samples that can be built by recursively taking a core sample, finding all of its neighbors that are core samples, finding all of their neighbors that are core samples, and so on. A cluster also has a set of non-core samples, which are samples that are neighbors of a core sample in the cluster but are not themselves core samples. Intuitively, these samples are on the fringes of a cluster. 
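As a concrete illustration (a sketch; the cell below relies on scikit-learn's defaults, which are eps=0.5 and min_samples=5), the two density parameters can be set explicitly:
db = DBSCAN(eps=0.5, min_samples=5).fit(reduced_data)
# lower eps or higher min_samples require denser regions to form a cluster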
End of explanation """ %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns; sns.set() import numpy as np clusdf.head() reduced_data # Plot the data with K Means Labels from sklearn.cluster import KMeans kmeans = KMeans(4, random_state=0) labels = kmeans.fit(reduced_data).predict(reduced_data) plt.scatter(reduced_data[:, 0], reduced_data[:, 1], c=labels, s=40, cmap='viridis'); X=reduced_data from sklearn.cluster import KMeans from scipy.spatial.distance import cdist def plot_kmeans(kmeans, X, n_clusters=4, rseed=0, ax=None): labels = kmeans.fit_predict(X) # plot the input data ax = ax or plt.gca() ax.axis('equal') ax.scatter(X[:, 0], X[:, 1], c=labels, s=40, cmap='viridis', zorder=2) # plot the representation of the KMeans model centers = kmeans.cluster_centers_ radii = [cdist(X[labels == i], [center]).max() for i, center in enumerate(centers)] for c, r in zip(centers, radii): ax.add_patch(plt.Circle(c, r, fc='#CCCCCC', lw=3, alpha=0.5, zorder=1)) kmeans = KMeans(n_clusters=4, random_state=0) plot_kmeans(kmeans, X) rng = np.random.RandomState(13) X_stretched = np.dot(X, rng.randn(2, 2)) kmeans = KMeans(n_clusters=4, random_state=0) plot_kmeans(kmeans, X_stretched) from sklearn.mixture import GMM gmm = GMM(n_components=4).fit(X) labels = gmm.predict(X) plt.scatter(X[:, 0], X[:, 1], c=labels, s=40, cmap='viridis'); probs = gmm.predict_proba(X) print(probs[:5].round(3)) size = 50 * probs.max(1) ** 2 # square emphasizes differences plt.scatter(X[:, 0], X[:, 1], c=labels, cmap='viridis', s=size); from matplotlib.patches import Ellipse def draw_ellipse(position, covariance, ax=None, **kwargs): """Draw an ellipse with a given position and covariance""" ax = ax or plt.gca() # Convert covariance to principal axes if covariance.shape == (2, 2): U, s, Vt = np.linalg.svd(covariance) angle = np.degrees(np.arctan2(U[1, 0], U[0, 0])) width, height = 2 * np.sqrt(s) else: angle = 0 width, height = 2 * np.sqrt(covariance) # Draw the Ellipse for nsig in range(1, 4): ax.add_patch(Ellipse(position, nsig * width, nsig * height, angle, **kwargs)) def plot_gmm(gmm, X, label=True, ax=None): ax = ax or plt.gca() labels = gmm.fit(X).predict(X) if label: ax.scatter(X[:, 0], X[:, 1], c=labels, s=40, cmap='viridis', zorder=2) else: ax.scatter(X[:, 0], X[:, 1], s=40, zorder=2) ax.axis('equal') w_factor = 0.2 / gmm.weights_.max() for pos, covar, w in zip(gmm.means_, gmm.covars_, gmm.weights_): draw_ellipse(pos, covar, alpha=w * w_factor) gmm = GMM(n_components=4, random_state=42) plot_gmm(gmm, X) gmm = GMM(n_components=4, covariance_type='full', random_state=42) plot_gmm(gmm, X_stretched) from sklearn.datasets import make_moons Xmoon, ymoon = make_moons(200, noise=.05, random_state=0) plt.scatter(Xmoon[:, 0], Xmoon[:, 1]); gmm2 = GMM(n_components=2, covariance_type='full', random_state=0) plot_gmm(gmm2, Xmoon) gmm16 = GMM(n_components=16, covariance_type='full', random_state=0) plot_gmm(gmm16, Xmoon, label=False) """ Explanation: Gaussian mixture models a mixture model is a probabilistic model for representing the presence of subpopulations within an overall population, without requiring that an observed data set should identify the sub-population to which an individual observation belongs. Formally a mixture model corresponds to the mixture distribution that represents the probability distribution of observations in the overall population. 
However, while problems associated with "mixture distributions" relate to deriving the properties of the overall population from those of the sub-populations, "mixture models" are used to make statistical inferences about the properties of the sub-populations given only observations on the pooled population, without sub-population identity information. sklearn.mixture is a package which enables one to learn Gaussian Mixture Models (diagonal, spherical, tied and full covariance matrices supported), sample them, and estimate them from data. Facilities to help determine the appropriate number of components are also provided. A Gaussian mixture model is a probabilistic model that assumes all the data points are generated from a mixture of a finite number of Gaussian distributions with unknown parameters. One can think of mixture models as generalizing k-means clustering to incorporate information about the covariance structure of the data as well as the centers of the latent Gaussians. Scikit-learn implements different classes to estimate Gaussian mixture models, that correspond to different estimation strategies. cite- https://jakevdp.github.io/PythonDataScienceHandbook/05.12-gaussian-mixtures.html End of explanation """ %matplotlib inline n_components = np.arange(1, 21) models = [GMM(n, covariance_type='full', random_state=0).fit(Xmoon) for n in n_components] plt.plot(n_components, [m.bic(Xmoon) for m in models], label='BIC') plt.plot(n_components, [m.aic(Xmoon) for m in models], label='AIC') plt.legend(loc='best') plt.xlabel('n_components') plt.show() """ Explanation: mixture of 16 Gaussians serves not to find separated clusters of data, but rather to model the overall distribution of the input data End of explanation """ from sklearn.cluster import Birch X = reduced_data brc = Birch(branching_factor=50, n_clusters=None, threshold=0.5,compute_labels=True) brc.fit(X) brc.predict(X) labels = brc.predict(X) plt.scatter(reduced_data[:, 0], reduced_data[:, 1], c=labels, s=40, cmap='viridis'); plt.show() """ Explanation: The optimal number of clusters is the value that minimizes the AIC or BIC, depending on which approximation we wish to use. Here it is 8. BIRCH The Birch (Balanced Iterative Reducing and Clustering using Hierarchies ) builds a tree called the Characteristic Feature Tree (CFT) for the given data. The data is essentially lossy compressed to a set of Characteristic Feature nodes (CF Nodes). The CF Nodes have a number of subclusters called Characteristic Feature subclusters (CF Subclusters) and these CF Subclusters located in the non-terminal CF Nodes can have CF Nodes as children. The CF Subclusters hold the necessary information for clustering which prevents the need to hold the entire input data in memory. This information includes: Number of samples in a subcluster. Linear Sum - A n-dimensional vector holding the sum of all samples Squared Sum - Sum of the squared L2 norm of all samples. Centroids - To avoid recalculation linear sum / n_samples. Squared norm of the centroids. It is a memory-efficient, online-learning algorithm provided as an alternative to MiniBatchKMeans. It constructs a tree data structure with the cluster centroids being read off the leaf. These can be either the final cluster centroids or can be provided as input to another clustering algorithm such as AgglomerativeClustering. 
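For example (a sketch; scikit-learn's Birch accepts either an integer or a clusterer instance for n_clusters), the subcluster centroids can be handed to AgglomerativeClustering for the final global step:
from sklearn.cluster import Birch, AgglomerativeClustering
brc_global = Birch(threshold=0.5, n_clusters=AgglomerativeClustering(n_clusters=4))
labels_global = brc_global.fit_predict(reduced_data)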
End of explanation """ from sklearn.cluster import MiniBatchKMeans import numpy as np X = reduced_data # manually fit on batches kmeans = MiniBatchKMeans(n_clusters=2,random_state=0,batch_size=6) kmeans = kmeans.partial_fit(X[0:6,:]) kmeans = kmeans.partial_fit(X[6:12,:]) kmeans.cluster_centers_ kmeans.predict(X) # fit on the whole data kmeans = MiniBatchKMeans(n_clusters=4,random_state=0,batch_size=6,max_iter=10).fit(X) kmeans.cluster_centers_ kmeans.predict(X) # Plot the decision boundary. For that, we will assign a color to each h=0.02 x_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1 y_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) # Obtain labels for each point in mesh. Use last trained model. Z = kmeans.predict(np.c_[xx.ravel(), yy.ravel()]) # Put the result into a color plot Z = Z.reshape(xx.shape) plt.figure(1) plt.clf() plt.imshow(Z, interpolation='nearest', extent=(xx.min(), xx.max(), yy.min(), yy.max()), cmap=plt.cm.Paired, aspect='auto', origin='lower') plt.plot(reduced_data[:, 0], reduced_data[:, 1], 'k.', markersize=2) # Plot the centroids as a white X centroids = kmeans.cluster_centers_ plt.scatter(centroids[:, 0], centroids[:, 1], marker='x', s=169, linewidths=3, color='w', zorder=10) plt.title('K-means clustering on the digits dataset (PCA-reduced data)\n' 'Centroids are marked with white cross') plt.xlim(x_min, x_max) plt.ylim(y_min, y_max) plt.xticks(()) plt.yticks(()) plt.show() """ Explanation: # Mini Batch K-Means The MiniBatchKMeans is a variant of the KMeans algorithm which uses mini-batches to reduce the computation time, while still attempting to optimise the same objective function. Mini-batches are subsets of the input data, randomly sampled in each training iteration. These mini-batches drastically reduce the amount of computation required to converge to a local solution. In contrast to other algorithms that reduce the convergence time of k-means, mini-batch k-means produces results that are generally only slightly worse than the standard algorithm. The algorithm iterates between two major steps, similar to vanilla k-means. In the first step, samples are drawn randomly from the dataset, to form a mini-batch. These are then assigned to the nearest centroid. In the second step, the centroids are updated. In contrast to k-means, this is done on a per-sample basis. 
End of explanation """ print(__doc__) import numpy as np from sklearn.cluster import MeanShift, estimate_bandwidth from sklearn.datasets.samples_generator import make_blobs # ############################################################################# # Generate sample data centers = [[1, 1], [-1, -1], [1, -1]] X = reduced_data # ############################################################################# # Compute clustering with MeanShift # The following bandwidth can be automatically detected using bandwidth = estimate_bandwidth(X, quantile=0.2, n_samples=500) ms = MeanShift(bandwidth=bandwidth, bin_seeding=True) ms.fit(X) labels = ms.labels_ cluster_centers = ms.cluster_centers_ labels_unique = np.unique(labels) n_clusters_ = len(labels_unique) print("number of estimated clusters : %d" % n_clusters_) # ############################################################################# # Plot result import matplotlib.pyplot as plt from itertools import cycle plt.figure(1) plt.clf() colors = cycle('bgrcmykbgrcmykbgrcmykbgrcmyk') for k, col in zip(range(n_clusters_), colors): my_members = labels == k cluster_center = cluster_centers[k] plt.plot(X[my_members, 0], X[my_members, 1], col + '.') plt.plot(cluster_center[0], cluster_center[1], 'o', markerfacecolor=col, markeredgecolor='k', markersize=14) plt.title('Estimated number of clusters: %d' % n_clusters_) plt.show() """ Explanation: Mean Shift MeanShift clustering aims to discover "blobs" in a smooth density of samples. It is a centroid-based algorithm, which works by updating candidates for centroids to be the mean of the points within a given region. These candidates are then filtered in a post-processing stage to eliminate near-duplicates to form the final set of centroids. The implementation used here applies a flat kernel, and seeding is performed using a binning technique for scalability. End of explanation """ from sklearn import metrics from sklearn.metrics import pairwise_distances from sklearn import datasets dataset = datasets.load_iris() X = dataset.data y = dataset.target import numpy as np from sklearn.cluster import KMeans kmeans_model = KMeans(n_clusters=3, random_state=1).fit(X) labels = kmeans_model.labels_ labels_true=y labels_pred=labels from sklearn import metrics metrics.adjusted_rand_score(labels_true, labels_pred) from sklearn import metrics metrics.adjusted_mutual_info_score(labels_true, labels_pred) metrics.homogeneity_score(labels_true, labels_pred) metrics.completeness_score(labels_true, labels_pred) metrics.v_measure_score(labels_true, labels_pred) metrics.silhouette_score(X, labels, metric='euclidean') """ Explanation: Given knowledge of the ground truth class assignments labels_true and our clustering algorithm's assignments of the same samples labels_pred, scikit-learn provides several evaluation functions (see https://scikit-learn.org/stable/modules/clustering.html#clustering-performance-evaluation): - the adjusted Rand index is a function that measures the similarity of the two assignments; - the Mutual Information is a function that measures the agreement of the two assignments, ignoring permutations. There are two desirable objectives for any cluster assignment: - homogeneity: each cluster contains only members of a single class. 
- completeness: all members of a given class are assigned to the same cluster. We can turn these concepts into scores: homogeneity_score and completeness_score. Both are bounded below by 0.0 and above by 1.0 (higher is better). Their harmonic mean, called the V-measure, is computed by v_measure_score. The Silhouette Coefficient is defined for each sample and is composed of two scores: a: the mean distance between a sample and all other points in the same class. b: the mean distance between a sample and all other points in the next nearest cluster. End of explanation """
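As an added sanity check (not in the original notebook), the adjusted Rand index is close to 0 for a chance labelling and exactly 1 for a perfect match; labels_true is the iris ground truth defined in the cell above.
import numpy as np
from sklearn import metrics

rng = np.random.RandomState(0)
random_labels = rng.randint(0, 3, size=len(labels_true))
print(metrics.adjusted_rand_score(labels_true, labels_true))    # 1.0 for perfect agreement
print(metrics.adjusted_rand_score(labels_true, random_labels))  # near 0 for random labels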
queirozfcom/python-sandbox
python3/notebooks/crimes/task.ipynb
mit
# to cluster districts by crimes, I will represent each district as a vector with the counts for each crime # that happened in there # but District is a floating point number # how many different Districts are there? districts = df.District.unique() # ok so even though it's a float it's probably a categorical column # remove NaNs because they may be noise not_nans = np.invert(np.isnan(districts)) districts = districts[not_nans] districts # we need this because some districts may not have had all types of crime # in which case the vector would not match the others # (they need to be in the same order too) all_iucrs = df["IUCR"].unique() all_iucrs.sort() all_iucrs district_vectors = [] for district_index in districts: iucrs = df[df["District"] == district_index].groupby("IUCR")["IUCR"] vector = [] for iucr_name in all_iucrs: # if there's a key error, it means this district has had no crimes of this IUCR code try: count = len(iucrs.get_group(iucr_name)) except KeyError: count = 0 vector.append(count) district_vectors.append(vector) X = np.vstack(district_vectors) X.shape # we normalize it to prevent large absolute values from affecting more sc = preprocessing.MinMaxScaler() X_scaled = sc.fit_transform(X) kmeans = KMeans(n_clusters=2, random_state=0).fit(X_scaled) kmeans.cluster_centers_ """ Explanation: other things, questions that are worth asking to get a high-level view: what types of crimes (IUCR) have had the largest (relative, or percentage) drop/rise year on year? what locations (Community.Area) have had the largest (relative, or percentage) drop/rise year on year? question 2 End of explanation """ # extract features and try out a simple model just to get some results quickly # one-hot-encode categorical features districts = pd.get_dummies(df["District"]) ## DID NOT FINISH """ Explanation: what I would do if I had more time try to project the points into a lower dimensionality to see whether they make sense and whether we can get any insight from looking at it use other clustering methods use cluster metrics (such as entropy) to see whether the clusters are good at splitting the data. question 3 general strategy for a quick model: 1) extract features, train every date becomes: number of days from today this is because the "test set" is the next 6 months (from the end of the dataset), so we don't have data in that time frame, but if we consider the number of days from today, we still get some generalization and will be able to spot trends districts are one hot encoded murders are encoded with a 1 if IUCR=='HOMICIDE', 0 otherwise 2) at inference time, generate rows with all districts and murder == 1, for the 6 months. (6 x 50 == 300 so it's not much data) then we run a simple algorithm like Logistic Regression and get the probability, for each (district, month) that murder == 1. Then we select the district with the highest probability. End of explanation """
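A hedged sketch of the question 3 strategy described above (added here, not the author's code): the 'Date' column, the IUCR == 'HOMICIDE' murder flag and the use of LogisticRegression are assumptions taken from the text and may not match the actual dataset.
import pandas as pd
from sklearn.linear_model import LogisticRegression

dates = pd.to_datetime(df["Date"])                      # assumed date column
days_back = (dates.max() - dates).dt.days               # "days from today"-style feature
district_dummies = pd.get_dummies(df["District"], prefix="district")
X_feat = pd.concat([district_dummies, days_back.rename("days_back")], axis=1)
y_murder = (df["IUCR"] == "HOMICIDE").astype(int)       # murder flag as described in the text

clf = LogisticRegression(max_iter=1000).fit(X_feat, y_murder)
# At inference time, build rows for each (district, month) over the next 6 months
# and rank districts by clf.predict_proba, as outlined above.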
bassio/omicexperiment
omicexperiment/docs/02_experiment_filters.ipynb
bsd-3-clause
%load_ext autoreload %autoreload 2 #Load our data from omicexperiment.experiment.microbiome import MicrobiomeExperiment mapping = "example_map.tsv" biom = "example_fungal.biom" tax = "blast_tax_assignments.txt" exp = MicrobiomeExperiment(biom, mapping,tax) """ Explanation: Experiment objects filters - the rationale A syntactic sugar API is provided for various common filtering operations on the components of the experiment objects (for example the three dataframes of the MicrobiomeExperiment object). The rationale behind providing such syntactic sugar in the API is that working with three dataframes at the same time can be taxing, as it describes multidimensional data (and metadata!). Again, rapid analysis and economy in typing is the ultimate aspiration here, avoiding repetitive boilerplate, especially knowing that almost the same operations are required in particular downstream analyses in various typical omic experiments. This notebook/chapter will provide various examples (currently for MicrobiomeExperiment), and can be regarded as a cookbook for various operations performed in a microbial amplicon metabarcoding experiment. End of explanation """ exp.data_df exp.mapping_df """ Explanation: Experiment filters The basis of the filter functionality are two methods on the experiment objects. The first method is called 'filter'. The second method is called 'efilter' (short for "experiment filter"). The filter method basically filters the data_df according to the parameters passed as we are going to explain below. It is important to remember that the data_df is our main dataframe (our contingency table or matrix) and therefore remember that the filter function only filters the data_df and not for example the mapping_df. The method then returns a new pandas DataFrame object (the new "filtered" or "modified" data_df). The efilter method provides the same functionality as filter. The only difference is that efilter follows the paradigm of providing a whole new experiment object, rather than just providing a stand-alone new data DataFrame object. As explained before, this paradigm is helpful as it allows method chaining etc. The filter subpackage From the filters subpackage, you can import the various filters: from omicexperiment.transforms.filters import Sample from omicexperiment.transforms.filters import Observation from omicexperiment.transforms.filters import Taxonomy These "filters" are also provided on the MicrobiomeExperiment object, as shortcuts. Taxonomy = exp.Taxonomy #OR from omicexperiment.transforms.filters import Taxonomy What are filters? Filters are basically classes (can be objects/instances), that hold attributes that are subclasses of the FilterExpression object. FilterExpressions can be considered fairly magical, as they utilize operator overloading in an attempt to provide a shorthand API with a sugary syntax for applying various operations on the experiment dataframe objects. The three filters * Taxonomy filter: apply various operations on the taxonomy * Sample filter: apply various operations on samples/sample metadata * Observation filter: apply various operations on observations (i.e. OTUs in a microbiome context) The only way to get the gist of how these work is perhaps to view the code examples. End of explanation """ Sample = exp.Sample #OR from omicexperiment.transforms.filters import Sample #1. 
the count filter exp.dapply(Sample.count > 90000) #note sample0 was filtered off as count = 86870 # this filter implements various operators #the count filter implements various operators (due to the FlexibleOperator mixin) #here we try the __eq__ (==) operator, the cell above we tried the > operator exp.dapply(Sample.count == 100428) #2. the att (attribute) filter # this filters on the "attributes" (i.e. metadata) of the samples # present in the mapping dataframe # this uses an attribute access (dotted) syntax #here we only select samples in the 'control' group exp.dapply(Sample.att.group == 'control') #only one sample in this group #select only samples of asthmatic patients exp.dapply(Sample.att.asthma == 1) #only three asthma-positive samples #another alias for the att filter is the c attribute on the Sample Filter #(c is short for "column", as per sqlalchemy convention) exp.dapply(Sample.c.asthma == 1) #only three asthma-positive samples #some columns may not be legal python attribute names, #so for these we allow the [] (__getitem__) syntax exp.dapply(Sample.att['#SampleID'] == 'sample0') """ Explanation: Sample Filter examples End of explanation """ exp.apply(Sample.c.asthma == 1).dapply(Sample.count > 100000) #two samples # the Sample groupby filter #the aggregate function here is the mean, #then finally normalizes to a 100 (mean relative abundance) exp.dapply(Sample.groupby("group")) # the Sample groupby_sum filter #the aggregate function here is the sum -- no normalization is applied exp.dapply(Sample.groupby_sum("group")) """ Explanation: An example of method chaining using efilter instead of filter End of explanation """ Taxonomy = exp.Taxonomy #OR from omicexperiment.transforms.filters import Taxonomy exp.taxonomy_df #1. the taxonomy groupby filter #this filter is very important, as it is used to collapse otus by their taxonomies #according to the taxonomic rank asked for exp.dapply(Taxonomy.groupby('genus')) #any taxonomic rank can be passed ''' We noticed above that one of the assignments was identified at a highest resolution only at the family level. We can utilize 2. the taxonomy attribute filters to remove these OTUs that were classified at a lower resolution than a genus before continuing with downstream analyses. ''' genus_or_higher = exp.apply(Taxonomy.rank_resolution >= 'genus') #note efilter genus_or_higher.apply(Taxonomy.groupby('genus')).data_df #Another example of the various Taxonomy attribute filters exp.dapply(Taxonomy.genus == 'g__Aspergillus') #only three otus had a genus assigned as 'g__Aspergillus" """ Explanation: Taxonomy filters Taxonomy filters allows common operations done on the taxonomy metadata of the Observations/OTUs. End of explanation """
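A short added illustration (my sketch, using only calls shown above) of chaining these filters: keep OTUs resolved to at least genus level, collapse them by genus, then keep samples with more than 90,000 reads.
# Assumes the exp, Taxonomy and Sample objects defined earlier in this notebook.
genus_level = exp.apply(Taxonomy.rank_resolution >= 'genus').apply(Taxonomy.groupby('genus'))
well_sequenced_df = genus_level.dapply(Sample.count > 90000)
well_sequenced_df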
catalystcomputing/DSIoT-Python-sessions
Session4/code/02 Decision Tree Classifier - random_state.ipynb
apache-2.0
# Imports from sklearn import metrics from sklearn.tree import DecisionTreeClassifier import pandas as pd # Training Data training_raw = pd.read_table("../data/training_data.dat") df_training = pd.DataFrame(training_raw) # test Data test_raw = pd.read_table("../data/test_data.dat") df_test = pd.DataFrame(test_raw) # target names target_categories = ['Unclassified','Art','Aviation','Boating','Camping /Walking /Climbing','Collecting'] # Extract target results from panda target = df_training["CategoryID"].values # Create classifier class model_dtc = DecisionTreeClassifier() # features feature_names_integers = ['Barcode','UnitRRP'] # Extra features from panda (without description) training_data_integers = df_training[feature_names_integers].values training_data_integers[:3] # train model model_dtc.fit(training_data_integers, target) # Extract test data and test the model test_data_integers = df_test[feature_names_integers].values test_target = df_test["CategoryID"].values expected = test_target predicted_dtc = model_dtc.predict(test_data_integers) print(metrics.classification_report(expected, predicted_dtc, target_names=target_categories)) print(metrics.confusion_matrix(expected, predicted_dtc)) metrics.accuracy_score(expected, predicted_dtc, normalize=True, sample_weight=None) """ Explanation: Decision Tree Classifier - random_state In the previous notebook we got an accuracy score of just over 40%. Let's just do that again. End of explanation """ model_dtc = DecisionTreeClassifier() model_dtc.fit(training_data_integers, target) predicted_dtc = model_dtc.predict(test_data_integers) metrics.accuracy_score(expected, predicted_dtc, normalize=True, sample_weight=None) """ Explanation: and again. End of explanation """ model_dtc = DecisionTreeClassifier() model_dtc.fit(training_data_integers, target) predicted_dtc = model_dtc.predict(test_data_integers) metrics.accuracy_score(expected, predicted_dtc, normalize=True, sample_weight=None) """ Explanation: one more time :) End of explanation """ model_dtc = DecisionTreeClassifier(random_state=511) model_dtc.fit(training_data_integers, target) predicted_dtc = model_dtc.predict(test_data_integers) metrics.accuracy_score(expected, predicted_dtc, normalize=True, sample_weight=None) model_dtc = DecisionTreeClassifier(random_state=511) model_dtc.fit(training_data_integers, target) predicted_dtc = model_dtc.predict(test_data_integers) metrics.accuracy_score(expected, predicted_dtc, normalize=True, sample_weight=None) """ Explanation: We see that the results are not the same. This is because the Decision Tree Classifier randomly permutes the features at each split, so when several splits are equally good a different one may be chosen on each run. As we are about to start trying to improve the results with different strategies for preparing and loading the data, having results that vary from run to run would be unhelpful. To avoid this we can manually set the random_state. End of explanation """
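As an added illustration (not in the original notebook), looping over a few seeds shows how much the accuracy moves before we pin one down; the seed values are arbitrary.
for seed in (0, 1, 42, 511):
    m = DecisionTreeClassifier(random_state=seed)
    m.fit(training_data_integers, target)
    acc = metrics.accuracy_score(expected, m.predict(test_data_integers))
    print("random_state=%d -> accuracy %.4f" % (seed, acc))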
stbaercom/europython2015_logging
europython_2015_logging_talk.ipynb
mit
from datetime import datetime def my_division_p(dividend, divisor): try: print("Debug, Division : {}/{}".format(dividend,divisor)) result = dividend / divisor return result except (ZeroDivisionError, TypeError): print("Error, Division Failed") return None def division_task_handler_p(task): print("Handling division task,{} items".format(len(task))) result = [] for i, task in enumerate(task): print("Doing division iteration {} on {:%Y}".format(i,datetime.now())) dividend, divisor = task result.append(my_division_p(dividend,divisor)) return result """ Explanation: <center> <img width="95%" src="figures/europython_title_img.png"/> </center> Agenda Why Logging How does Logging work for you? Optional Content The Presentation The slides, support code and Jupyter notebook are on Github https://github.com/stbaercom/europython2015_logging A Simple Program, Without any Logging End of explanation """ task = [(3,4),(5,1.4),(2,0),(3,5),("10",1)] division_task_handler_p(task) """ Explanation: Let us Have a Look at the Output End of explanation """ import log1; logging = log1.get_clean_logging() logging.basicConfig(level=logging.DEBUG) log = logging.getLogger() def my_division(dividend, divisor): try: log.debug("Division : %s/%s", dividend, divisor) result = dividend / divisor return result except (ZeroDivisionError, TypeError): log.exception("Error, Division Failed") return None def division_task_handler(task): log.info("Handling division task,%s items",len(task)) result = [] for i, task in enumerate(task): log.info("Doing division iteration %s",i) dividend, divisor = task result.append(my_division(dividend,divisor)) return result """ Explanation: The Problems with print() We don't have a way to select the types of messages we are interested in We have to add all information (timestamps, etc...) by ourselves All our messages will look slightly different We have only limited control over where our messages end up What is Different with Logging? We have more structure, and easier parsing The logging module provides some extra information (Logger, Level, and Formatting) We get handling of exceptions essentially for free. Aspects of a Logging Message <center> <img width="95%" src="figures/DimensionsLogging.png"/> </center> Using the Logging Module for Comparison End of explanation """ task = [(3,4),(2,0),(3,5),("10",1)] division_task_handler(task) """ Explanation: The Call and the Log Messages End of explanation """ import log1;logging = log1.get_clean_logging() # this would be import logging outside this notebook logging.debug("Find me in the log") logging.info("I am hidden") logging.warn("I am here") logging.error("As am I") try: 1/0; except: logging.exception(" And I") logging.critical("Me, of course") """ Explanation: How does the Logging Module represent these Aspects <center> <img width="90%" src="figures/DimensionsLoggingImp.png"/> </center> Back to Code. How does Logging Work? End of explanation """ import log1;logging = log1.get_clean_logging() datefmt = "%Y-%m-%d %H:%M:%S" msgfmt = "%(asctime)s,%(msecs)03d %(levelname)-10s %(name)-15s : %(message)s" logging.basicConfig(level=logging.DEBUG, format=msgfmt, datefmt=datefmt) logging.debug("Now I show up ") logging.info("Now this is %s logging!","good") logging.warn("I am here. 
%-4i + %-4i = %i",1,3,1+3) logging.error("As am I") try: 1/0; except: logging.exception(" And I") """ Explanation: More Complex Logging Setup with basicConfig() End of explanation """ import log1, json, logging.config;logging = log1.get_clean_logging() datefmt = "%Y-%m-%d %H:%M:%S" msgfmt = "%(asctime)s,%(msecs)03d %(levelname)-6s %(name)-10s : %(message)s" log = logging.getLogger() log.setLevel(logging.DEBUG) lh = logging.StreamHandler() lf = logging.Formatter(fmt=msgfmt, datefmt=datefmt) lh.setFormatter(lf) log.addHandler(lh) log.info("Now this is %s logging!","good") log.debug("A slightly more complex message %s + %s = %s",1,2,1+2) """ Explanation: Some (personal) Remarks about basicConfig() basicConfig() does save you some typing, but I would go for the 'normal' setup. Using basicConfig() is a matter of personal taste. The normal setup makes the structure clearer. Keep in mind that basicConfig() is meant to be called once... Using the Standard Configuration End of explanation """ import log1, json, logging.config;logging = log1.get_clean_logging() conf_dict = { 'version': 1, 'disable_existing_loggers': True, 'formatters': { 'longformat': { 'format': "%(asctime)s,%(msecs)03d %(levelname)-10s %(name)-15s : %(message)s", 'datefmt': "%Y-%m-%d %H:%M:%S"}}, 'handlers': { 'console': { 'class': 'logging.StreamHandler', 'formatter': "longformat"}}, 'loggers':{ '': { 'level': 'DEBUG', 'handlers': ['console']}}} logging.config.dictConfig(conf_dict) log = logging.getLogger() log.info("Now this is %s logging!","good") """ Explanation: Now, back to the Theory. What have we Build? <center> <img src="figures/LogTree_Basic.png" width="90%"/> </center> How do we get from the Configuration to the Log Message? <center> <img src="figures/Format.png" width="95%"/> </center> Formatting : Attributes Available for the Logging Call <table height="80%" class="bst"><tr><th>Attribute</th><th>Description</th></tr><tr><td>args</td><td>Tuple of arguments passed to the logging call</td></tr><tr><td>asctime</td><td>Log record creation time, formatted</td></tr><tr><td>created</td><td>Log record creation time, seconds since the Epoch</td></tr><tr><td>exc_info</td><td>Exception information / stack trace, if any</td></tr><tr><td>filename</td><td>Filename portion of pathname for the logging module</td></tr><tr><td>funcName</td><td>Name of function containing the logging call</td></tr><tr><td>levelname</td><td>Name of Logging Level</td></tr><tr><td>levelno</td><td>Number of Logging Level</td></tr><tr><td>lineno</td><td>Line number in source code for the logging call</td></tr><tr><td>module</td><td>Module (name portion of filename).</td></tr><tr><td>message</td><td>Logged message</td></tr><tr><td>name</td><td>Name of the logger used to log the call.</td></tr><tr><td>pathname</td><td>pathname of source file</td></tr><tr><td>process</td><td>Process ID</td></tr><tr><td>processName</td><td>Process name</td></tr><tr><td>...</td><td>...</td></tr></table> Using dictConfig() End of explanation """ import log1, json, logging.config;logging = log1.get_clean_logging() base_config = json.load(open("conf_dict.json")) base_config['handlers']['logfile'] = { 'class' : 'logging.FileHandler', 'mode' : 'w', 'filename' : 'logfile.txt', 'formatter': "longformat"} base_config['loggers']['']['handlers'].append('logfile') logging.config.dictConfig(base_config) log = logging.getLogger() log.info("Now this is %s logging!","good") !cat logfile.txt """ Explanation: Adding a Filehandler to the Logger End of explanation """ import log1, json, 
logging.config;logging = log1.get_clean_logging() file_config = json.load(open("conf_dict_with_file.json")) file_config['handlers']['logfile']['level'] = "WARN" logging.config.dictConfig(file_config) log = logging.getLogger() log.info("Now this is %s logging!","good") log.warning("Now this is %s logging!","worrisome") !cat logfile.txt """ Explanation: ## Another look at the logging object tree <center> <img src="figures/LogTree_File.png" width="80%"/> </center> Set the Level on the FileHandler End of explanation """ import log1,json,logging.config;logging = log1.get_clean_logging() logging.config.dictConfig(json.load(open("conf_dict.json"))) log = logging.getLogger("") child_A = logging.getLogger("A") child_B = logging.getLogger("B") child_B_A = logging.getLogger("B.A") log.info("Now this is %s logging!","good") child_A.info("Now this is more logging!") log.warning("Now this is %s logging!","worrisome") """ Explanation: Adding Child Loggers under the Root End of explanation """ import log1,json,logging.config;logging = log1.get_clean_logging() logging.config.dictConfig(json.load(open("conf_dict.json"))) def log_filter(rec): # Callables work with 3.2 and later if 'please' in rec.msg.lower(): return True return False log = logging.getLogger("") log.addFilter(log_filter) child_A = logging.getLogger("A") log.info("Just log me") child_A.info("Just log me") log.info("Hallo, Please log me") """ Explanation: ## Looking at the tree of Logging Objects <center> <img src="figures/LogTree_Full.png" width="90%"/> </center> Best Practices for the Logging Tree Use .getLogger(__name__) per module to define loggers under the root logger Set propagate to True on each Logger Attach Handlers and Filters as needed to control output from the Logging hierarchy Filter - Now that things are Getting Complicated With more loggers and handlers in the tree of logging objects, things are getting complicated We may not want every logger to send log records to every filter The logging level gives us some control, there are limits Filters are one solution to this problem Filter can also add information to records, thus helping with structured logging Using Filters <center> <img src="figures/LogTree_Filter.png" width="80%"/> </center> An Example for using Filter Objects End of explanation """ import log1, json, logging.config;logging = log1.get_clean_logging() datefmt = "%Y-%m-%d %H:%M:%S" msgfmt = "%(asctime)s,%(msecs)03d %(levelname)-6s %(name)-10s : %(message)s" log_reg = None def handler_filter(rec): # Callables work with 3.2 and later global log_reg if 'please' in rec.msg.lower(): rec.msg = rec.msg + " (I am nice)" # Changing the record rec.args = (rec.args[0].upper(), rec.args[1] + 10) rec.__dict__['custom_name'] = "Important context information" log_reg = rec return True return False log = logging.getLogger() lh = logging.StreamHandler() lf = logging.Formatter(fmt=msgfmt, datefmt=datefmt) lh.setFormatter(lf) log.addHandler(lh) lh.addFilter(handler_filter) log.warn("I am a bold Logger","good") log.warn("Hi, I am %s. I am %i seconds old. 
Please log me","Loggy", 1) """ Explanation: The Way of a Logging Record <center> <img src="figures/LoggingFlow.png" width="100%"/> </center> A second Example for Filters, in the LogHandler End of explanation """ print(log_reg) log_reg.__dict__ """ Explanation: Things you might want to know ( if we still have some time) A short look at our LogRecord End of explanation """ import json, logging.config config = json.load(open("conf_dict_with_file.json")) logging.config.dictConfig(config) import requests import logging_tree logging_tree.printout() """ Explanation: Logging Performance - Slow, but Fast Enough <table class="bst"><tr><th>Scenario (10000 Call, 3 Logs per call)</th><th>Runtime</th></tr><tr><td>Full Logging with buffered writes</td><td>3.096s</td></tr><tr><td>Disable Caller information</td><td>2.868s</td></tr><tr><td>Check Logging Lvl before Call, Logging disabled</td><td>0.186s</td></tr><tr><td>Logging module level disabled</td><td>0.181s</td></tr><tr><td>No Logging calls at all</td><td>0.157s</td></tr></table> Getting the current Logging Tree End of explanation """ import log1,json,logging,logging.config;logging = log1.get_clean_logging() #Load Config, define a child logger (could also be a module) logging.config.dictConfig(json.load(open("conf_dict_with_file.json"))) child_log = logging.getLogger("somewhere") #Reload Config logging.config.dictConfig(json.load(open("conf_dict_with_file.json"))) #Our childlogger was disables child_log.info("Now this is %s logging!","good") """ Explanation: Reconfiguration It is possible to change the logging configuration at runtime It is even part of the standard library Still, some caution is in order Reloading the configuration can disable the existing loggers End of explanation """ import log1, json, logging, logging.config;logging = log1.get_clean_logging() config = json.load(open("conf_dict_with_file.json")) #Load Config, define a child logger (could also be a module) logging.config.dictConfig(config) child_log = logging.getLogger("somewhere") config['disable_existing_loggers'] = False #Reload Config logging.config.dictConfig(config) #Our childlogger was disables child_log.info("Now this is %s logging!","good") """ Explanation: Reloading can happen in place End of explanation """ from presentation_helper import customize_settings customize_settings() """ Explanation: Successful Logging to all of You End of explanation """
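An added sketch (my own, not from the talk) of the same idea written as a logging.Filter subclass that injects extra context into every record, which a formatter can then reference; the request_id field is purely illustrative.
import logging

class ContextFilter(logging.Filter):
    def __init__(self, request_id):
        super().__init__()
        self.request_id = request_id

    def filter(self, record):
        record.request_id = self.request_id  # extra attribute for the formatter
        return True

demo_log = logging.getLogger("demo")
demo_handler = logging.StreamHandler()
demo_handler.setFormatter(logging.Formatter("%(request_id)s %(levelname)-6s %(message)s"))
demo_log.addHandler(demo_handler)
demo_log.addFilter(ContextFilter("req-42"))
demo_log.warning("every record carries the injected request_id")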
marcelomiky/PythonCodes
Coursera/CICCP2/.ipynb_checkpoints/Curso Introdução à Ciência da Computação com Python - Parte 2-checkpoint.ipynb
mit
def cria_matriz(tot_lin, tot_col, valor): matriz = [] #lista vazia for i in range(tot_lin): linha = [] for j in range(tot_col): linha.append(valor) matriz.append(linha) return matriz x = cria_matriz(2, 3, 99) x def cria_matriz(tot_lin, tot_col, valor): matriz = [] #lista vazia for i in range(tot_lin): linha = [] for j in range(tot_col): linha.append(valor) matriz.append(linha) return matriz x = cria_matriz(2, 3, 99) x """ Explanation: Semana 1 End of explanation """ def cria_matriz(num_linhas, num_colunas): matriz = [] #lista vazia for i in range(num_linhas): linha = [] for j in range(num_colunas): linha.append(0) matriz.append(linha) for i in range(num_colunas): for j in range(num_linhas): matriz[j][i] = int(input("Digite o elemento [" + str(j) + "][" + str(i) + "]: ")) return matriz x = cria_matriz(2, 3) x def tarefa(mat): dim = len(mat) for i in range(dim): print(mat[i][dim-1-i], end=" ") mat = [[1,2,3],[4,5,6],[7,8,9]] tarefa(mat) # Observação: o trecho do print (end = " ") irá mudar a finalização padrão do print # que é pular para a próxima linha. Com esta mudança, o cursor permanecerá na mesma # linha aguardando a impressão seguinte. """ Explanation: Este código faz com que primeiramente toda a primeira linha seja preenchida, em seguida a segunda e assim sucessivamente. Se nós quiséssemos que a primeira coluna fosse preenchida e em seguida a segunda coluna e assim por diante, como ficaria o código? Um exemplo: se o usuário digitasse o seguinte comando “x = cria_matriz(2,3)” e em seguida informasse os seis números para serem armazenados na matriz, na seguinte ordem: 1, 2, 3, 4, 5, 6; o x teria ao final da função a seguinte matriz: [[1, 3, 5], [2, 4, 6]]. End of explanation """ def dimensoes(A): '''Função que recebe uma matriz como parâmetro e imprime as dimensões da matriz recebida, no formato iXj. Obs: i = colunas, j = linhas Exemplo: >>> minha_matriz = [[1], [2], [3] ] >>> dimensoes(minha_matriz) >>> 3X1 ''' lin = len(A) col = len(A[0]) return print("%dX%d" % (lin, col)) matriz1 = [[1], [2], [3]] dimensoes(matriz1) matriz2 = [[1, 2, 3], [4, 5, 6]] dimensoes(matriz2) """ Explanation: Exercício 1: Tamanho da matriz Escreva uma função dimensoes(matriz) que recebe uma matriz como parâmetro e imprime as dimensões da matriz recebida, no formato iXj. Exemplos: minha_matriz = [[1], [2], [3]] dimensoes(minha_matriz) 3X1 minha_matriz = [[1, 2, 3], [4, 5, 6]] dimensoes(minha_matriz) 2X3 End of explanation """ def soma_matrizes(m1, m2): def dimensoes(A): lin = len(A) col = len(A[0]) return ((lin, col)) if dimensoes(m1) != dimensoes(m2): return False else: matriz = [] for i in range(len(m1)): linha = [] for j in range(len(m1[0])): linha.append(m1[i][j] + m2[i][j]) matriz.append(linha) return matriz m1 = [[1, 2, 3], [4, 5, 6]] m2 = [[2, 3, 4], [5, 6, 7]] soma_matrizes(m1, m2) m1 = [[1], [2], [3]] m2 = [[2, 3, 4], [5, 6, 7]] soma_matrizes(m1, m2) """ Explanation: Exercício 2: Soma de matrizes Escreva a função soma_matrizes(m1, m2) que recebe 2 matrizes e devolve uma matriz que represente sua soma caso as matrizes tenham dimensões iguais. Caso contrário, a função deve devolver False. 
Exemplos: m1 = [[1, 2, 3], [4, 5, 6]] m2 = [[2, 3, 4], [5, 6, 7]] soma_matrizes(m1, m2) => [[3, 5, 7], [9, 11, 13]] m1 = [[1], [2], [3]] m2 = [[2, 3, 4], [5, 6, 7]] soma_matrizes(m1, m2) => False End of explanation """ def imprime_matriz(A): for i in range(len(A)): for j in range(len(A[i])): print(A[i][j]) minha_matriz = [[1], [2], [3]] imprime_matriz(minha_matriz) minha_matriz = [[1, 2, 3], [4, 5, 6]] imprime_matriz(minha_matriz) """ Explanation: Praticar tarefa de programação: Exercícios adicionais (opcionais) Exercício 1: Imprimindo matrizes Como proposto na primeira vídeo-aula da semana, escreva uma função imprime_matriz(matriz), que recebe uma matriz como parâmetro e imprime a matriz, linha por linha. Note que NÃO se deve imprimir espaços após o último elemento de cada linha! Exemplos: minha_matriz = [[1], [2], [3]] imprime_matriz(minha_matriz) 1 2 3 minha_matriz = [[1, 2, 3], [4, 5, 6]] imprime_matriz(minha_matriz) 1 2 3 4 5 6 End of explanation """ def sao_multiplicaveis(m1, m2): '''Recebe duas matrizes como parâmetros e devolve True se as matrizes forem multiplicáveis (número de colunas da primeira é igual ao número de linhs da segunda). False se não forem ''' if len(m1) == len(m2[0]): return True else: return False m1 = [[1, 2, 3], [4, 5, 6]] m2 = [[2, 3, 4], [5, 6, 7]] sao_multiplicaveis(m1, m2) m1 = [[1], [2], [3]] m2 = [[1, 2, 3]] sao_multiplicaveis(m1, m2) """ Explanation: Exercício 2: Matrizes multiplicáveis Duas matrizes são multiplicáveis se o número de colunas da primeira é igual ao número de linhas da segunda. Escreva a função sao_multiplicaveis(m1, m2) que recebe duas matrizes como parâmetro e devolve True se as matrizes forem multiplicavéis (na ordem dada) e False caso contrário. Exemplos: m1 = [[1, 2, 3], [4, 5, 6]] m2 = [[2, 3, 4], [5, 6, 7]] sao_multiplicaveis(m1, m2) => False m1 = [[1], [2], [3]] m2 = [[1, 2, 3]] sao_multiplicaveis(m1, m2) => True End of explanation """ "áurea gosta de coentro".capitalize() "AQUI".capitalize() # função para remover espaços em branco " email@company.com ".strip() "o abecedário da Xuxa é didático".count("a") "o abecedário da Xuxa é didático".count("á") "o abecedário da Xuxa é didático".count("X") "o abecedário da Xuxa é didático".count("x") "o abecedário da Xuxa é didático".count("z") "A vida como ela seje".replace("seje", "é") "áurea gosta de coentro".capitalize().center(80) #80 caracteres de largura, no centro apareça este texto texto = "Ao que se percebe, só há o agora" texto texto.find("q") texto.find('se') texto[7] + texto[8] texto.find('w') fruta = 'amora' fruta[:4] # desde o começo até a posição TRÊS! fruta[1:] # desde a posição 1 (começa no zero) até o final fruta[2:4] # desde a posição 2 até a posição 3 """ Explanation: Semana 2 End of explanation """ def mais_curto(lista_de_nomes): menor = lista_de_nomes[0] # considerando que o menor nome está no primeiro lugar for i in lista_de_nomes: if len(i) < len(menor): menor = i return menor.capitalize() lista = ['carlos', 'césar', 'ana', 'vicente', 'maicon', 'washington'] mais_curto(lista) ord('a') ord('A') ord('b') ord('m') ord('M') ord('AA') 'maçã' > 'banana' 'Maçã' > 'banana' 'Maçã'.lower() > 'banana'.lower() txt = 'José' txt = txt.lower() txt lista = ['ana', 'maria', 'José', 'Valdemar'] len(lista) lista[3].lower() lista[2] lista[2] = lista[2].lower() lista for i in lista: print(i) lista[0][0] """ Explanation: Exercício Escrever uma função que recebe uma lista de Strings contendo nomes de pessoas como parâmetro e devolve o nome mais curto. 
A função deve ignorar espaços antes e depois do nome e deve devolver o nome com a primeira letra maiúscula. End of explanation """ def menor_string(array_string): for i in range(len(array_string)): array_string[i] = array_string[i].lower() menor = array_string[0] # considera o primeiro como o menor for i in array_string: if ord(i[0][0]) < ord(menor[0]): menor = i return menor lista = ['maria', 'José', 'Valdemar'] menor_string(lista) # Código para inverter string e deixa maiúsculo def fazAlgo(string): pos = len(string)-1 string = string.upper() while pos >= 0: print(string[pos],end = "") pos = pos - 1 fazAlgo("paralelepipedo") # Código que deixa maiúsculo as letras de ordem ímpar: def fazAlgo(string): pos = 0 string1 = "" string = string.lower() stringMa = string.upper() while pos < len(string): if pos % 2 == 0: string1 = string1 + stringMa[pos] else: string1 = string1 + string[pos] pos = pos + 1 return string1 print(fazAlgo("paralelepipedo")) # Código que tira os espaços em branco def fazAlgo(string): pos = 0 string1 = "" while pos < len(string): if string[pos] != " ": string1 = string1 + string[pos] pos = pos + 1 return string1 print(fazAlgo("ISTO É UM TESTE")) # e para retornar "Istoéumteste", ou seja, só deixar a primeira letra maiúscula... def fazAlgo(string): pos = 0 string1 = "" while pos < len(string): if string[pos] != " ": string1 = string1 + string[pos] pos = pos + 1 string1 = string1.capitalize() return string1 print(fazAlgo("ISTO É UM TESTE")) x, y = 10, 20 x, y x y def peso_altura(): return 77, 1.83 peso_altura() peso, altura = peso_altura() peso altura # Atribuição múltipla em C (vacas magras...) ''' int a, b, temp a = 10 b = 20 temp = a a = b b = temp ''' a, b = 10, 20 a, b = b, a a, b # Atribuição aumentada x = 10 x = x + 10 x x = 10 x += 10 x x = 3 x *= 2 x x = 2 x **= 10 x x = 100 x /= 3 x def pagamento_semanal(valor_por_hora, num_horas = 40): return valor_por_hora * num_horas pagamento_semanal(10) pagamento_semanal(10, 20) # aceita, mesmo assim, o segundo parâmetro. # Asserção de Invariantes def pagamento_semanal(valor_por_hora, num_horas = 40): assert valor_por_hora >= 0 and num_horas > 0 return valor_por_hora * num_horas pagamento_semanal(30, 10) pagamento_semanal(10, -10) x, y = 10, 12 x, y = y, x print("x = ",x,"e y = ",y) x = 10 x += 10 x /= 2 x //= 3 x %= 2 x *= 9 print(x) def calculo(x, y = 10, z = 5): return x + y * z; calculo(1, 2, 3) calculo(1, 2) # 2 entra em y. 
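# Added note (not part of the original course code; the helper name calculo_kw is hypothetical):
# you cannot leave a positional argument "empty" to reach a later default value --
# pass the later parameter by name instead.
def calculo_kw(x, y=10, z=5):
    return x + y * z

print(calculo_kw(1, z=2))   # y keeps its default: 1 + 10 * 2 = 21
print(calculo_kw(1, 2))     # y = 2, z keeps its default: 1 + 2 * 5 = 11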
def calculo(x, y = 10, z = 5): return x + y * z; print(calculo(1, 2, 3)) calculo() print(calculo( ,12, 10)) def horario_em_segundos(h, m, s): assert h >= 0 and m >= 0 and s >= 0 return h * 3600 + m * 60 + s print(horario_em_segundos (3,0,50)) print(horario_em_segundos(1,2,3)) print(horario_em_segundos (-1,20,30)) # Módulos em Python def fib(n): # escreve a série de Fibonacci até n a, b = 0, 1 while b < n: print(b, end = ' ') a, b = b, a + b print() def fib2(n): result = [] a, b = 0, 1 while b < n: result.append(b) a, b = b, a + b return result ''' E no shell do Python (chamado na pasta que contém o arquivo fibo.py) >>> import fibo >>> fibo.fib(100) 1 1 2 3 5 8 13 21 34 55 89 >>> fibo.fib2(100) [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89] >>> fibo.fib2(1000) [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987] >>> meuFib = fibo.fib >>> meuFib(20) 1 1 2 3 5 8 13 ''' """ Explanation: Exercício Escreva uma função que recebe um array de strings como parâmetro e devolve o primeiro string na ordem lexicográfica, ignorando-se maiúsculas e minúsculas End of explanation """ def fazAlgo(string): # inverte a string e deixa as vogais maiúsculas pos = len(string)-1 # define a variável posição do array stringMi = string.lower() # aqui estão todas minúsculas string = string.upper() # aqui estão todas maiúsculas stringRe = "" # string de retorno while pos >= 0: if string[pos] == 'A' or string[pos] == 'E' or string[pos] == 'I' or string[pos] == 'O' or string[pos] == 'U': stringRe = stringRe + string[pos] else: stringRe = stringRe + stringMi[pos] pos = pos - 1 return stringRe if __name__ == "__main__": print(fazAlgo("teste")) print(fazAlgo("o ovo do avestruz")) print(fazAlgo("A CASA MUITO ENGRAÇADA")) print(fazAlgo("A TELEvisão queBROU")) print(fazAlgo("A Vaca Amarela")) """ Explanation: Incluindo <pre>print(__name__)</pre> na última linha de fibo.py, ao fazer a importação import fibo no shell do Python, imprime 'fibo', que é o nome do programa. Ao incluir <pre> if __name__ == "__main__": import sys fib(int(sys.argv[1])) </pre> podemos ver se está sendo executado como script (com o if do jeito que está) ou como módulo dentro de outro código (se o nome não for main, está sendo importado pra usar alguma função lá dentro). End of explanation """ maiusculas('Programamos em python 2?') # deve devolver 'P' maiusculas('Programamos em Python 3.') # deve devolver 'PP' maiusculas('PrOgRaMaMoS em python!') # deve devolver 'PORMMS' def maiusculas(frase): listRe = [] # lista de retorno vazia stringRe = '' # string de retorno vazia for ch in frase: if ord(ch) >=65 and ord(ch) <= 91: listRe.append(ch) # retornando a lista para string stringRe = ''.join(listRe) return stringRe maiusculas('Programamos em python 2?') maiusculas('Programamos em Python 3.') maiusculas('PrOgRaMaMoS em python!') x = ord('A') y = ord('a') x, y ord('B') ord('Z') """ Explanation: Exercício 1: Letras maiúsculas Escreva a função maiusculas(frase) que recebe uma frase (uma string) como parâmetro e devolve uma string com as letras maiúsculas que existem nesta frase, na ordem em que elas aparecem. Para resolver este exercício, pode ser útil verificar uma tabela ASCII, que contém os valores de cada caractere. Ver http://equipe.nce.ufrj.br/adriano/c/apostila/tabascii.htm Note que para simplificar a solução do exercício, as frases passadas para a sua função não possuirão caracteres que não estejam presentes na tabela ASCII apresentada, como ç, á, É, ã, etc. 
Dica: Os valores apresentados na tabela são os mesmos devolvidos pela função ord apresentada nas aulas. Exemplos: End of explanation """ menor_nome(['maria', 'josé', 'PAULO', 'Catarina']) # deve devolver 'José' menor_nome(['maria', ' josé ', ' PAULO', 'Catarina ']) # deve devolver 'José' menor_nome(['Bárbara', 'JOSÉ ', 'Bill']) # deve devolver José def menor_nome(nomes): tamanho = len(nomes) # pega a quantidade de nomes na lista menor = '' # variável para escolher o menor nome lista_limpa = [] # lista de nomes sem os espaços em branco # ignora espaços em branco for str in nomes: lista_limpa.append(str.strip()) # verifica o menor nome menor = lista_limpa[0] # considera o primeiro como menor for str in lista_limpa: if len(str) < len(menor): # não deixei <= senão pegará um segundo menor de mesmo tamanho menor = str return menor.capitalize() # deixa a primeira letra maiúscula menor_nome(['maria', 'josé', 'PAULO', 'Catarina']) # deve devolver 'José' menor_nome(['maria', ' josé ', ' PAULO', 'Catarina ']) # deve devolver 'José' menor_nome(['Bárbara', 'JOSÉ ', 'Bill']) # deve devolver José menor_nome(['Bárbara', 'JOSÉ ', 'Bill', ' aDa ']) """ Explanation: Exercício 2: Menor nome Como pedido no primeiro vídeo desta semana, escreva uma função menor_nome(nomes) que recebe uma lista de strings com nome de pessoas como parâmetro e devolve o nome mais curto presente na lista. A função deve ignorar espaços antes e depois do nome e deve devolver o menor nome presente na lista. Este nome deve ser devolvido com a primeira letra maiúscula e seus demais caracteres minúsculos, independente de como tenha sido apresentado na lista passada para a função. Quando houver mais de um nome com o menor comprimento dentre os nomes na lista, a função deve devolver o primeiro nome com o menor comprimento presente na lista. Exemplos: End of explanation """ def conta_letras(frase, contar = 'vogais'): pos = len(frase) - 1 # atribui na variável pos (posição) a posição do array count = 0 # define o contador de vogais while pos >= 0: # conta as vogais if frase[pos] == 'a' or frase[pos] == 'e' or frase[pos] == 'i' or frase[pos] == 'o' or frase[pos] == 'u': count += 1 pos = pos - 1 if contar == 'consoantes': frase = frase.replace(' ', '') # retira espaços em branco return len(frase) - count # subtrai do total as vogais else: return count conta_letras('programamos em python') conta_letras('programamos em python', 'vogais') conta_letras('programamos em python', 'consoantes') conta_letras('bcdfghjklmnpqrstvxywz', 'consoantes') len('programamos em python') frase = 'programamos em python' frase.replace(' ', '') frase """ Explanation: Exercícios adicionais Exercício 1: Contando vogais ou consoantes Escreva a função conta_letras(frase, contar="vogais"), que recebe como primeiro parâmetro uma string contendo uma frase e como segundo parâmetro uma outra string. Este segundo parâmetro deve ser opcional. Quando o segundo parâmetro for definido como "vogais", a função deve devolver o numero de vogais presentes na frase. Quando ele for definido como "consoantes", a função deve devolver o número de consoantes presentes na frase. Se este parâmetro não for passado para a função, deve-se assumir o valor "vogais" para o parâmetro. Exemplos: conta_letras('programamos em python') 6 conta_letras('programamos em python', 'vogais') 6 conta_letras('programamos em python', 'consoantes') 13 End of explanation """ def primeiro_lex(lista): resposta = lista[0] # define o primeiro item da lista como a resposta...mas verifica depois. 
for str in lista: if ord(str[0]) < ord(resposta[0]): resposta = str return resposta assert primeiro_lex(['oĺá', 'A', 'a', 'casa']), 'A' assert primeiro_lex(['AAAAAA', 'b']), 'AAAAAA' primeiro_lex(['casa', 'a', 'Z', 'A']) primeiro_lex(['AAAAAA', 'b']) """ Explanation: Exercício 2: Ordem lexicográfica Como pedido no segundo vídeo da semana, escreva a função primeiro_lex(lista) que recebe uma lista de strings como parâmetro e devolve o primeiro string na ordem lexicográfica. Neste exercício, considere letras maiúsculas e minúsculas. Dica: revise a segunda vídeo-aula desta semana. Exemplos: primeiro_lex(['oĺá', 'A', 'a', 'casa']) 'A' primeiro_lex(['AAAAAA', 'b']) 'AAAAAA' End of explanation """ def cria_matriz(tot_lin, tot_col, valor): matriz = [] #lista vazia for i in range(tot_lin): linha = [] for j in range(tot_col): linha.append(valor) matriz.append(linha) return matriz # import matriz # descomentar apenas no arquivo .py def soma_matrizes(A, B): num_lin = len(A) num_col = len(A[0]) C = cria_matriz(num_lin, num_col, 0) # matriz com zeros for lin in range(num_lin): # percorre as linhas da matriz for col in range(num_col): # percorre as colunas da matriz C[lin][col] = A[lin][col] + B[lin][col] return C if __name__ == '__main__': A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] B = [[10, 20, 30], [40, 50, 60], [70, 80, 90]] print(soma_matrizes(A, B)) # No arquivo matriz.py def cria_matriz(tot_lin, tot_col, valor): matriz = [] #lista vazia for i in range(tot_lin): linha = [] for j in range(tot_col): linha.append(valor) matriz.append(linha) return matriz # E no arquivo soma_matrizes.py import matriz def soma_matrizes(A, B): num_lin = len(A) num_col = len(A[0]) C = matriz.cria_matriz(num_lin, num_col, 0) # matriz com zeros for lin in range(num_lin): # percorre as linhas da matriz for col in range(num_col): # percorre as colunas da matriz C[lin][col] = A[lin][col] + B[lin][col] return C if __name__ == '__main__': A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] B = [[10, 20, 30], [40, 50, 60], [70, 80, 90]] print(soma_matrizes(A, B)) ''' Multiplicação de matrizes: 1 2 3 1 2 22 28 4 5 6 * 3 4 = 49 64 5 6 1*1 + 2*3 + 3*5 = 22 1*2 + 2*4 + 3*6 = 28 4*1 + 5*3 + 6*5 = 49 4*2 + 5*4 + 6*6 = 64 c11 = a11*b11 + a12*b21 + c13*c31 c12 = a11*b21 + a12*b22 + c13*c23 c21 = a21*b11 + a22*b21 + c23*c31 c22 = a21*b21 + a22*b22 + c23*c23 ''' def multiplica_matrizes (A, B): num_linA, num_colA = len(A), len(A[0]) num_linB, num_colB = len(B), len(B[0]) assert num_colA == num_linB C = [] for lin in range(num_linA): # percorre as linhas da matriz A # começando uma nova linha C.append([]) for col in range(num_colB): # percorre as colunas da matriz B # Adicionando uma nova coluna na linha C[lin].append(0) for k in range(num_colA): C[lin][col] += A[lin][k] * B[k][col] return C if __name__ == '__main__': A = [[1, 2, 3], [4, 5, 6]] B = [[1, 2], [3, 4], [5, 6]] print(multiplica_matrizes(A, B)) """ Explanation: Semana 3 - POO – Programação Orientada a Objetos End of explanation """ class Carro: pass meu_carro = Carro() meu_carro carro_do_trabalho = Carro() carro_do_trabalho meu_carro.ano = 1968 meu_carro.modelo = 'Fusca' meu_carro.cor = 'azul' meu_carro.ano meu_carro.cor carro_do_trabalho.ano = 1981 carro_do_trabalho.modelo = 'Brasília' carro_do_trabalho.cor = 'amarela' carro_do_trabalho.ano novo_fusca = meu_carro # duas variáveis apontando para o mesmo objeto novo_fusca #repare que é o mesmo end. 
de memória novo_fusca.ano += 10 novo_fusca.ano novo_fusca """ Explanation: POO End of explanation """ class Pato: pass pato = Pato() patinho = Pato() if pato == patinho: print("Estamos no mesmo endereço!") else: print("Estamos em endereços diferentes!") class Carro: def __init__(self, modelo, ano, cor): # init é o Construtor da classe self.modelo = modelo self.ano = ano self.cor = cor carro_do_meu_avo = Carro('Ferrari', 1980, 'vermelha') carro_do_meu_avo carro_do_meu_avo.cor """ Explanation: Testes para praticar End of explanation """ def main(): carro1 = Carro('Brasília', 1968, 'amarela', 80) carro2 = Carro('Fuscão', 1981, 'preto', 95) carro1.acelere(40) carro2.acelere(50) carro1.acelere(80) carro1.pare() carro2.acelere(100) class Carro: def __init__(self, modelo, ano, cor, vel_max): self.modelo = modelo self.ano = ano self.cor = cor self.vel = 0 self.maxV = vel_max # velocidade máxima def imprima(self): if self.vel == 0: # parado dá para ver o ano print('%s %s %d' % (self.modelo, self.cor, self.ano)) elif self.vel < self.maxV: print('%s %s indo a %d km/h' % (self.modelo, self.cor, self.vel)) else: print('%s %s indo muito rapido!' % (self.modelo, self.cor)) def acelere(self, velocidade): self.vel = velocidade if self.vel > self.maxV: self.vel = self.maxV self.imprima() def pare(self): self.vel = 0 self.imprima() main() """ Explanation: POO – Programação Orientada a Objetos – Parte 2 End of explanation """ class Cafeteira: def __init__(self, marca, tipo, tamanho, cor): self.marca = marca self.tipo = tipo self.tamanho = tamanho self.cor = cor class Cachorro: def __init__(self, raça, idade, nome, cor): self.raça = raça self.idade = idade self.nome = nome self.cor = cor rex = Cachorro('vira-lata', 2, 'Bobby', 'marrom') 'vira-lata' == rex.raça rex.idade > 2 rex.idade == '2' rex.nome == 'rex' Bobby.cor == 'marrom' rex.cor == 'marrom' class Lista: def append(self, elemento): return "Oops! 
Este objeto não é uma lista" lista = [] a = Lista() b = a.append(7) lista.append(b) a b lista """ Explanation: TESTE PARA PRATICAR POO – Programação Orientada a Objetos – Parte 2 End of explanation """ import math class Bhaskara: def delta(self, a, b, c): return b ** 2 - 4 * a * c def main(self): a_digitado = float(input("Digite o valor de a:")) b_digitado = float(input("Digite o valor de b:")) c_digitado = float(input("Digite o valor de c:")) print(self.calcula_raizes(a_digitado, b_digitado, c_digitado)) def calcula_raizes(self, a, b, c): d = self.delta(self, a, b, c) if d == 0: raiz1 = (-b + math.sqrt(d)) / (2 * a) return 1, raiz1 # indica que tem uma raiz e o valor dela else: if d < 0: return 0 else: raiz1 = (-b + math.sqrt(d)) / (2 * a) raiz2 = (-b - math.sqrt(d)) / (2 * a) return 2, raiz1, raiz2 main() main() import Bhaskara class TestBhaskara: def testa_uma_raiz(self): b = Bhaskara.Bhaskara() assert b.calcula_raizes(1, 0, 0) == (1, 0) def testa_duas_raizes(self): b = Bhaskara.Bhaskara() assert b.calcula_raizes(1, -5, 6) == (2, 3, 2) def testa_zero_raizes(self): b = Bhaskara.Bhaskara() assert b.calcula_raizes(10, 10, 10) == 0 def testa_raiz_negativa(self): b = Bhaskara.Bhaskara() assert b.calcula_raizes(10, 20, 10) == (1, -1) """ Explanation: Códigos Testáveis End of explanation """ # Nos estudos ficou pytest_bhaskara.py import Bhaskara import pytest class TestBhaskara: @pytest.fixture def b(self): return Bhaskara.Bhaskara() def testa_uma_raiz(self, b): assert b.calcula_raizes(1, 0, 0) == (1, 0) def testa_duas_raizes(self, b): assert b.calcula_raizes(1, -5, 6) == (2, 3, 2) def testa_zero_raizes(self, b): assert b.calcula_raizes(10, 10, 10) == 0 def testa_raiz_negativa(self, b): assert b.calcula_raizes(10, 20, 10) == (1, -1) """ Explanation: Fixture: valor fixo para um conjunto de testes @pytest.fixture End of explanation """ def fatorial(n): if n < 0: return 0 i = fat = 1 while i <= n: fat = fat * i i += 1 return fat import pytest @pytest.mark.parametrize("entrada, esperado", [ (0, 1), (1, 1), (-10, 0), (4, 24), (5, 120) ]) def testa_fatorial(entrada, esperado): assert fatorial(entrada) == esperado """ Explanation: Parametrização End of explanation """ class Triangulo: def __init__(self, a, b, c): self.a = a self.b = b self.c = c def perimetro(self): return self.a + self.b + self.c t = Triangulo(1, 1, 1) t.a t.b t.c t.perimetro() """ Explanation: Exercícios Escreva uma versão do TestaBhaskara usando @pytest.mark.parametrize Escreva uma bateria de testes para o seu código preferido Tarefa de programação: Lista de exercícios - 3 Exercício 1: Uma classe para triângulos Defina a classe Triangulo cujo construtor recebe 3 valores inteiros correspondentes aos lados a, b e c de um triângulo. A classe triângulo também deve possuir um método perimetro, que não recebe parâmetros e devolve um valor inteiro correspondente ao perímetro do triângulo. t = Triangulo(1, 1, 1) deve atribuir uma referência para um triângulo de lados 1, 1 e 1 à variável t Um objeto desta classe deve responder às seguintes chamadas: t.a deve devolver o valor do lado a do triângulo t. b deve devolver o valor do lado b do triângulo t.c deve devolver o valor do lado c do triângulo t.perimetro() deve devolver um inteiro correspondente ao valor do perímetro do triângulo. 
End of explanation """ class Triangulo: def __init__(self, a, b, c): self.a = a self.b = b self.c = c def tipo_lado(self): if self.a == self.b and self.a == self.c: return 'equilátero' elif self.a != self.b and self.a != self.c and self.b != self.c: return 'escaleno' else: return 'isósceles' t = Triangulo(4, 4, 4) t.tipo_lado() u = Triangulo(3, 4, 5) u.tipo_lado() v = Triangulo(1, 3, 3) v.tipo_lado() t = Triangulo(5, 8, 5) t.tipo_lado() t = Triangulo(5, 5, 6) t.tipo_lado() ''' Exercício 1: Triângulos retângulos Escreva, na classe Triangulo, o método retangulo() que devolve True se o triângulo for retângulo, e False caso contrário. Exemplos: t = Triangulo(1, 3, 5) t.retangulo() # deve devolver False u = Triangulo(3, 4, 5) u.retangulo() # deve devolver True ''' class Triangulo: def __init__(self, a, b, c): self.a = a self.b = b self.c = c def retangulo(self): if self.a > self.b and self.a > self.c: if self.a ** 2 == self.b ** 2 + self.c ** 2: return True else: return False elif self.b > self.a and self.b > self.c: if self.b ** 2 == self.c ** 2 + self.a ** 2: return True else: return False else: if self.c ** 2 == self.a ** 2 + self.b ** 2: return True else: return False t = Triangulo(1, 3, 5) t.retangulo() t = Triangulo(3, 1, 5) t.retangulo() t = Triangulo(5, 1, 3) t.retangulo() u = Triangulo(3, 4, 5) u.retangulo() u = Triangulo(4, 5, 3) u.retangulo() u = Triangulo(5, 3, 4) u.retangulo() """ Explanation: Exercício 2: Tipos de triângulos Na classe triângulo, definida na Questão 1, escreva o metodo tipo_lado() que devolve uma string dizendo se o triângulo é: isóceles (dois lados iguais) equilátero (todos os lados iguais) escaleno (todos os lados diferentes) Note que se o triângulo for equilátero, a função não deve devolver isóceles. Exemplos: t = Triangulo(4, 4, 4) t.tipo_lado() deve devolver 'equilátero' u = Triangulo(3, 4, 5) .tipo_lado() deve devolver 'escaleno' End of explanation """ class Triangulo: ''' O resultado dos testes com seu programa foi: ***** [0.2 pontos]: Testando método semelhantes(Triangulo(3, 4, 5)) para Triangulo(3, 4, 5) - Falhou ***** TypeError: 'Triangulo' object is not iterable ***** [0.2 pontos]: Testando método semelhantes(Triangulo(3, 4, 5)) para Triangulo(6, 8, 10) - Falhou ***** TypeError: 'Triangulo' object is not iterable ***** [0.2 pontos]: Testando método semelhantes(Triangulo(6, 8, 10)) para Triangulo(3, 4, 5) - Falhou ***** TypeError: 'Triangulo' object is not iterable ***** [0.4 pontos]: Testando método semelhantes(Triangulo(3, 3, 3)) para Triangulo(3, 4, 5) - Falhou ***** TypeError: 'Triangulo' object is not iterable ''' def __init__(self, a, b, c): self.a = a self.b = b self.c = c # https://stackoverflow.com/questions/961048/get-class-that-defined-method def semelhantes(self, Triangulo): list1 = [] for arg in self: list1.append(arg) list2 = [] for arg in self1: list2.append(arg) for i in list2: print(i) t1 = Triangulo(2, 2, 2) t2 = Triangulo(4, 4, 4) t1.semelhantes(t2) """ Explanation: Exercício 2: Triângulos semelhantes Ainda na classe Triangulo, escreva um método semelhantes(triangulo) que recebe um objeto do tipo Triangulo como parâmetro e verifica se o triângulo atual é semelhante ao triângulo passado como parâmetro. Caso positivo, o método deve devolver True. Caso negativo, deve devolver False. Verifique a semelhança dos triângulos através do comprimento dos lados. Dica: você pode colocar os lados de cada um dos triângulos em uma lista diferente e ordená-las. 
Exemplo: t1 = Triangulo(2, 2, 2) t2 = Triangulo(4, 4, 4) t1.semelhantes(t2) deve devolver True ''' End of explanation """ def busca_sequencial(seq, x): '''(list, bool) -> bool''' for i in range(len(seq)): if seq[i] == x: return True return False # código com cara de C =\ list = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] busca_sequencial(list, 3) list = ['casa', 'texto', 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] busca_sequencial(list, 'texto') class Musica: def __init__(self, titulo, interprete, compositor, ano): self.titulo = titulo self.interprete = interprete self.compositor = compositor self.ano = ano class Buscador: def busca_por_titulo(self, playlist, titulo): for i in range(len(playlist)): if playlist[i].titulo == titulo: return i return -1 def vamos_buscar(self): playlist = [Musica("Ponta de Areia", "Milton Nascimento", "Milton Nascimento", 1975), Musica("Podres Poderes", "Caetano Veloso", "Caetano Veloso", 1984), Musica("Baby", "Gal Costa", "Caetano Veloso", 1969)] onde_achou = self.busca_por_titulo(playlist, "Baby") if onde_achou == -1: print("A música buscada não está na playlist") else: preferida = playlist[onde_achou] print(preferida.titulo, preferida.interprete, preferida.compositor, preferida.ano, sep = ', ') b = Buscador() b.vamos_buscar() """ Explanation: Week 4 Busca Sequencial End of explanation """ class Ordenador: def selecao_direta(self, lista): fim = len(lista) for i in range(fim - 1): # Inicialmente o menor elemento já visto é o i-ésimo posicao_do_minimo = i for j in range(i + 1, fim): if lista[j] < lista[posicao_do_minimo]: # encontrou um elemento menor... posicao_do_minimo = j # ...substitui. # Coloca o menor elemento encontrado no início da sub-lista # Para isso, troca de lugar os elementos nas posições i e posicao_do_minimo lista[i], lista[posicao_do_minimo] = lista[posicao_do_minimo], lista[i] lista = [10, 3, 8, -10, 200, 17, 32] o = Ordenador() o.selecao_direta(lista) lista lista_nomes = ['maria', 'carlos', 'wilson', 'ana'] o.selecao_direta(lista_nomes) lista_nomes import random print(random.randint(1, 10)) from random import shuffle x = [i for i in range(100)] shuffle(x) x o.selecao_direta(x) x def comprova_ordem(list): flag = True for i in range(len(list) - 1): if list[i] > list[i + 1]: flag = False return flag comprova_ordem(x) list = [1, 2, 3, 4, 5] list2 = [1, 3, 2, 4, 5] comprova_ordem(list) comprova_ordem(list2) def busca_sequencial(seq, x): for i in range(len(seq)): if seq[i] == x: return True return False def selecao_direta(lista): fim = len(lista) for i in range(fim-1): pos_menor = i for j in range(i+1,fim): if lista[j] < lista[pos_menor]: pos_menor = j lista[i],lista[pos_menor] = lista[pos_menor],lista[i] return lista numeros = [55,33,0,900,-432,10,77,2,11] """ Explanation: Complexidade Computacional Análise matemática do desempenho de um algoritmo Estudo analítico de: Quantas operações um algoritmo requer para que ele seja executado Quanto tempo ele vai demorar para ser executado Quanto de memória ele vai ocupar Análise da Busca Sequencial Exemplo: Lista telefônica de São Paulo, supondo 2 milhões de telefones fixos. Supondo que cada iteração do for comparação de string dure 1 milissegundo. 
Pior caso: 2000s = 33,3 minutos Caso médio (1 milhão): 1000s = 16,6 minutos Complexidade Computacional da Busca Sequencial Dada uma lista de tamanho n A complexidade computacional da busca sequencial é: n, no pior caso n/2, no caso médio Conclusão Busca sequencial é boa pois é bem simples Funciona bem quando a busca é feita num volume pequeno de dados Sua Complexidade Computacional é muito alta É muito lenta quando o volume de dados é grande Portanto, dizemos que é um algoritmo ineficiente Algoritmo de Ordenação Seleção Direta Seleção Direta A cada passo, busca pelo menor elemento do pedaço ainda não ordenado da lista e o coloca no início da lista No 1º passo, busca o menor elemento de todos e coloca na posição inicial da lista. No 2º passo, busca o 2º menor elemento da lista e coloca na 2ª posição da lista. No 3º passo, busca o 3º menor elemento da lista e coloca na 3ª posição da lista. Repete até terminar a lista End of explanation """ def ordenada(list): flag = True for i in range(len(list) - 1): if list[i] > list[i + 1]: flag = False return flag """ Explanation: Tarefa de programação: Lista de exercícios - 4 Exercício 1: Lista ordenada Escreva a função ordenada(lista), que recebe uma lista com números inteiros como parâmetro e devolve o booleano True se a lista estiver ordenada e False se a lista não estiver ordenada. End of explanation """ def busca(lista, elemento): for i in range(len(lista)): if lista[i] == elemento: return i return False busca(['a', 'e', 'i'], 'e') busca([12, 13, 14], 15) """ Explanation: Exercício 2: Busca sequencial Implemente a função busca(lista, elemento), que busca um determinado elemento em uma lista e devolve o índice correspondente à posição do elemento encontrado. Utilize o algoritmo de busca sequencial. Nos casos em que o elemento buscado não existir na lista, a função deve devolver o booleano False. busca(['a', 'e', 'i'], 'e') deve devolver => 1 busca([12, 13, 14], 15) deve devolver => False End of explanation """ def lista_grande(n): import random return random.sample(range(1, 1000), n) lista_grande(10) """ Explanation: Praticar tarefa de programação: Exercícios adicionais (opcionais) Exercício 1: Gerando listas grandes Escreva a função lista_grande(n), que recebe como parâmetro um número inteiro n e devolve uma lista contendo n números inteiros aleatórios. End of explanation """ def ordena(lista): fim = len(lista) for i in range(fim - 1): min = i for j in range(i + 1, fim): if lista[j] < lista[min]: min = j lista[i], lista[min] = lista[min], lista[i] return lista lista = [10, 3, 8, -10, 200, 17, 32] ordena(lista) lista """ Explanation: Exercício 2: Ordenação com selection sort Implemente a função ordena(lista), que recebe uma lista com números inteiros como parâmetro e devolve esta lista ordenada. Utilize o algoritmo selection sort. End of explanation """ class Ordenador: def selecao_direta(self, lista): fim = len(lista) for i in range(fim - 1): # Inicialmente o menor elemento já visto é o i-ésimo posicao_do_minimo = i for j in range(i + 1, fim): if lista[j] < lista[posicao_do_minimo]: # encontrou um elemento menor... posicao_do_minimo = j # ...substitui. 
# Coloca o menor elemento encontrado no início da sub-lista # Para isso, troca de lugar os elementos nas posições i e posicao_do_minimo lista[i], lista[posicao_do_minimo] = lista[posicao_do_minimo], lista[i] def bolha(self, lista): fim = len(lista) for i in range(fim - 1, 0, -1): for j in range(i): if lista[j] > lista[j + 1]: lista[j], lista[j + 1] = lista[j + 1], lista[j] """ Explanation: Week 5 - Algoritmo de Ordenação da Bolha - Bubblesort Lista como um tubo de ensaio vertical, os elementos mais leves sobem à superfície como uma bolha, os mais pesados afundam. Percorre a lista múltiplas vezes; a cada passagem, compara todos os elementos adjacentes e troca de lugar os que estiverem fora de ordem End of explanation """ lista = [10, 3, 8, -10, 200, 17, 32] o = Ordenador() o.bolha(lista) lista """ Explanation: Exemplo do algoritmo bubblesort em ação: Inicial: 5 1 7 3 2 1 5 7 3 2 1 5 3 7 2 1 5 3 2 7 (fim da primeira iteração) 1 3 5 2 7 1 3 2 5 7 (fim da segunda iteração) 1 2 3 5 7 End of explanation """ class Ordenador: def selecao_direta(self, lista): fim = len(lista) for i in range(fim - 1): # Inicialmente o menor elemento já visto é o i-ésimo posicao_do_minimo = i for j in range(i + 1, fim): if lista[j] < lista[posicao_do_minimo]: # encontrou um elemento menor... posicao_do_minimo = j # ...substitui. # Coloca o menor elemento encontrado no início da sub-lista # Para isso, troca de lugar os elementos nas posições i e posicao_do_minimo lista[i], lista[posicao_do_minimo] = lista[posicao_do_minimo], lista[i] def bolha(self, lista): fim = len(lista) for i in range(fim - 1, 0, -1): for j in range(i): if lista[j] > lista[j + 1]: lista[j], lista[j + 1] = lista[j + 1], lista[j] import random import time class ContaTempos: def lista_aleatoria(self, n): # n = número de elementos da lista from random import randrange lista = [0 for x in range(n)] # lista com n elementos, todos sendo zero for i in range(n): lista[i] = random.randrange(1000) # inteiros entre 0 e 999 return lista def compara(self, n): lista1 = self.lista_aleatoria(n) lista2 = lista1 o = Ordenador() antes = time.time() o.bolha(lista1) depois = time.time() print("Bolha demorou", depois - antes, "segundos") antes = time.time() o.selecao_direta(lista2) depois = time.time() print("Seleção direta demorou", depois - antes, "segundos") c = ContaTempos() c.compara(1000) print("Diferença de", 0.16308164596557617 - 0.05245494842529297) c.compara(5000) """ Explanation: Comparação de Desempenho Módulo time: função time() devolve o tempo decorrido (em segundos) desde 1/1/1970 (no Unix) Para medir um intervalo de tempo import time antes = time.time() algoritmo_a_ser_cronometrado() depois = time.time() print("A execução do algoritmo demorou ", depois - antes, "segundos") End of explanation """ class Ordenador: def selecao_direta(self, lista): fim = len(lista) for i in range(fim - 1): # Inicialmente o menor elemento já visto é o i-ésimo posicao_do_minimo = i for j in range(i + 1, fim): if lista[j] < lista[posicao_do_minimo]: # encontrou um elemento menor... posicao_do_minimo = j # ...substitui. 
# Coloca o menor elemento encontrado no início da sub-lista # Para isso, troca de lugar os elementos nas posições i e posicao_do_minimo lista[i], lista[posicao_do_minimo] = lista[posicao_do_minimo], lista[i] def bolha(self, lista): fim = len(lista) for i in range(fim - 1, 0, -1): for j in range(i): if lista[j] > lista[j + 1]: lista[j], lista[j + 1] = lista[j + 1], lista[j] def bolha_curta(self, lista): fim = len(lista) for i in range(fim - 1, 0, -1): trocou = False for j in range(i): if lista[j] > lista[j + 1]: lista[j], lista[j + 1] = lista[j + 1], lista[j] trocou = True if not trocou: # que é igual a if trocou == False return import random import time class ContaTempos: def lista_aleatoria(self, n): # n = número de elementos da lista from random import randrange lista = [random.randrange(1000) for x in range(n)] # lista com n elementos, todos sendo aleatórios de 0 a 999 return lista def lista_quase_ordenada(self, n): lista = [x for x in range(n)] # lista ordenada lista[n//10] = -500 # localizou o -500 no primeiro décimo da lista return lista def compara(self, n): lista1 = self.lista_aleatoria(n) lista2 = lista1 lista3 = lista2 o = Ordenador() print("Comparando lista aleatórias") antes = time.time() o.bolha(lista1) depois = time.time() print("Bolha demorou", depois - antes, "segundos") antes = time.time() o.selecao_direta(lista2) depois = time.time() print("Seleção direta demorou", depois - antes, "segundos") antes = time.time() o.bolha_curta(lista3) depois = time.time() print("Bolha otimizada", depois - antes, "segundos") print("\nComparando lista quase ordenadas") lista1 = self.lista_quase_ordenada(n) lista2 = lista1 lista3 = lista2 antes = time.time() o.bolha(lista1) depois = time.time() print("Bolha demorou", depois - antes, "segundos") antes = time.time() o.selecao_direta(lista2) depois = time.time() print("Seleção direta demorou", depois - antes, "segundos") antes = time.time() o.bolha_curta(lista3) depois = time.time() print("Bolha otimizada", depois - antes, "segundos") c = ContaTempos() c.compara(1000) c.compara(5000) """ Explanation: Melhoria no Algoritmo de Ordenação da Bolha Percorre a lista múltiplas vezes; a cada passagem, compara todos os elementos adjacentes e troca de lugar os que estiverem fora de ordem. Melhoria: se em uma das iterações, nenhuma troca é realizada, isso significa que a lista já está ordenada e podemos finalizar o algoritmo. 
End of explanation """ class Ordenador: def selecao_direta(self, lista): fim = len(lista) for i in range(fim - 1): posicao_do_minimo = i for j in range(i + 1, fim): if lista[j] < lista[posicao_do_minimo]: posicao_do_minimo = j lista[i], lista[posicao_do_minimo] = lista[posicao_do_minimo], lista[i] def bolha(self, lista): fim = len(lista) for i in range(fim - 1, 0, -1): for j in range(i): if lista[j] > lista[j + 1]: lista[j], lista[j + 1] = lista[j + 1], lista[j] def bolha_curta(self, lista): fim = len(lista) for i in range(fim - 1, 0, -1): trocou = False for j in range(i): if lista[j] > lista[j + 1]: lista[j], lista[j + 1] = lista[j + 1], lista[j] trocou = True if not trocou: return import random import time class ContaTempos: def lista_aleatoria(self, n): from random import randrange lista = [random.randrange(1000) for x in range(n)] return lista def lista_quase_ordenada(self, n): lista = [x for x in range(n)] lista[n//10] = -500 return lista import pytest class TestaOrdenador: @pytest.fixture def o(self): return Ordenador() @pytest.fixture def l_quase(self): c = ContaTempos() return c.lista_quase_ordenada(100) @pytest.fixture def l_aleatoria(self): c = ContaTempos() return c.lista_aleatoria(100) def esta_ordenada(self, l): for i in range(len(l) - 1): if l[i] > l[i+1]: return False return True def test_bolha_curta_aleatoria(self, o, l_aleatoria): o.bolha_curta(l_aleatoria) assert self.esta_ordenada(l_aleatoria) def test_selecao_direta_aleatoria(self, o, l_aleatoria): o.selecao_direta(l_aleatoria) assert self.esta_ordenada(l_aleatoria) def test_bolha_curta_quase(self, o, l_quase): o.bolha_curta(l_quase) assert self.esta_ordenada(l_quase) def test_selecao_direta_quase(self, o, l_quase): o.selecao_direta(l_quase) assert self.esta_ordenada(l_quase) [5, 2, 1, 3, 4] 2 5 1 3 4 2 1 5 3 4 2 1 3 5 4 2 1 3 4 5 [2, 3, 4, 5, 1] 2 3 4 1 5 2 3 1 4 5 2 1 3 4 5 1 2 3 4 5 """ Explanation: Site com algoritmos de ordenação http://nicholasandre.com.br/sorting/ Testes automatizados dos algoritmos de ordenação End of explanation """ class Buscador: def busca_por_titulo(self, playlist, titulo): for i in range(len(playlist)): if playlist[i].titulo == titulo: return i return -1 def busca_binaria(self, lista, x): primeiro = 0 ultimo = len(lista) - 1 while primeiro <= ultimo: meio = (primeiro + ultimo) // 2 if lista[meio] == x: return meio else: if x < lista[meio]: # busca na primeira metade da lista ultimo = meio - 1 # já foi visto que não está no elemento meio, então vai um a menos else: primeiro = meio + 1 return -1 lista = [-100, 0, 20, 30, 50, 100, 3000, 5000] b = Buscador() b.busca_binaria(lista, 30) """ Explanation: Busca Binária Objetivo: localizar o elemento x em uma lista Considere o elemento m do meio da lista se x == m ==> encontrou! se x < m ==> procure apenas na 1ª metade (da esquerda) se x > m ==> procure apenas na 2ª metade (da direita), repetir o processo até que o x seja encontrado ou que a sub-lista em questão esteja vazia End of explanation """ def busca(lista, elemento): primeiro = 0 ultimo = len(lista) - 1 while primeiro <= ultimo: meio = (primeiro + ultimo) // 2 if lista[meio] == elemento: print(meio) return meio else: if elemento < lista[meio]: # busca na primeira metade da lista ultimo = meio - 1 # já foi visto que não está no elemento meio, então vai um a menos print(meio) # função deve imprimir cada um dos índices testados pelo algoritmo. 
else: primeiro = meio + 1 print(meio) return False busca(['a', 'e', 'i'], 'e') busca([1, 2, 3, 4, 5], 6) busca([1, 2, 3, 4, 5, 6], 4) """ Explanation: Complexidade da Busca Binária Dado uma lista de n elementos No pior caso, teremos que efetuar: $$log_2n$$ comparações No exemplo da lista telefônica (com 2 milhões de números): $$log_2(2 milhões) = 20,9$$ Portanto: resposta em menos de 21 milissegundos! Conclusão Busca Binária é um algoritmo bastante eficiente Ao estudar a eficiência de um algoritmo é interessante: Analisar a complexidade computacional Realizar experimentos medindo o desempenho Tarefa de programação: Lista de exercícios - 5 Exercício 1: Busca binária Implemente a função busca(lista, elemento), que busca um determinado elemento em uma lista e devolve o índice correspondente à posição do elemento encontrado. Utilize o algoritmo de busca binária. Nos casos em que o elemento buscado não existir na lista, a função deve devolver o booleano False. Além de devolver o índice correspondente à posição do elemento encontrado, sua função deve imprimir cada um dos índices testados pelo algoritmo. Exemplo: busca(['a', 'e', 'i'], 'e') 1 deve devolver => 1 busca([1, 2, 3, 4, 5], 6) 2 3 4 deve devolver => False busca([1, 2, 3, 4, 5, 6], 4) 2 4 3 deve devolver => 3 End of explanation """ def bubble_sort(lista): fim = len(lista) for i in range(fim - 1, 0, -1): for j in range(i): if lista[j] > lista[j + 1]: lista[j], lista[j + 1] = lista[j + 1], lista[j] print(lista) print(lista) return lista bubble_sort([5, 1, 4, 2, 8]) #[1, 4, 2, 5, 8] #[1, 2, 4, 5, 8] #[1, 2, 4, 5, 8] #deve devolver [1, 2, 4, 5, 8] bubble_sort([1, 3, 4, 2, 0, 5]) #Esperado: #[1, 3, 2, 0, 4, 5] #[1, 2, 0, 3, 4, 5] #[1, 0, 2, 3, 4, 5] #[0, 1, 2, 3, 4, 5] #[0, 1, 2, 3, 4, 5] #O resultado dos testes com seu programa foi: #***** [0.6 pontos]: Verificando funcionamento do bubble sort - Falhou ***** #AssertionError: Expected #[1, 3, 4, 2, 0, 5] #[1, 3, 2, 0, 4, 5] #[1, 2, 0, 3, 4, 5] #[1, 0, 2, 3, 4, 5] #[0, 1, 2, 3, 4, 5] #[0, 1, 2, 3, 4, 5] # but got #[1, 3, 4, 2, 0, 5] #[1, 3, 2, 0, 4, 5] #[1, 2, 0, 3, 4, 5] #[1, 0, 2, 3, 4, 5] #[0, 1, 2, 3, 4, 5] """ Explanation: Exercício 2: Ordenação com bubble sort Implemente a função bubble_sort(lista), que recebe uma lista com números inteiros como parâmetro e devolve esta lista ordenada. Utilize o algoritmo bubble sort. Além de devolver uma lista ordenada, sua função deve imprimir os resultados parciais da ordenação ao fim de cada iteração do algoritmo ao longo da lista. Observe que, como a última iteração do algoritmo apenas verifica que a lista está ordenada, o último resultado deve ser impresso duas vezes. Portanto, se seu algoritmo precisa de duas passagens para ordenar a lista, e uma terceira para verificar que a lista está ordenada, 3 resultados parciais devem ser impressos. bubble_sort([5, 1, 4, 2, 8]) [1, 4, 2, 5, 8] [1, 2, 4, 5, 8] [1, 2, 4, 5, 8] deve devolver [1, 2, 4, 5, 8] End of explanation """ def insertion_sort(lista): fim = len(lista) for i in range(fim - 1): posicao_do_minimo = i for j in range(i + 1, fim): if lista[j] < lista[posicao_do_minimo]: posicao_do_minimo = j lista[i], lista[posicao_do_minimo] = lista[posicao_do_minimo], lista[i] return lista """ Explanation: Praticar tarefa de programação: Exercício adicional (opcional) Exercício 1: Ordenação com insertion sort Implemente a função insertion_sort(lista), que recebe uma lista com números inteiros como parâmetro e devolve esta lista ordenada. Utilize o algoritmo insertion sort. 
End of explanation """ def fatorial(n): if n <= 1: # base da recursão return 1 else: return n * fatorial(n - 1) # chamada recursiva import pytest @pytest.mark.parametrize("entrada, esperado", [ (0, 1), (1, 1), (2, 2), (3, 6), (4, 24), (5, 120) ]) def testa_fatorial(entrada, esperado): assert fatorial(entrada) == esperado #fatorial.py def fatorial(n): if n <= 1: # base da recursão return 1 else: return n * fatorial(n - 1) # chamada recursiva import pytest @pytest.mark.parametrize("entrada, esperado", [ (0, 1), (1, 1), (2, 2), (3, 6), (4, 24), (5, 120) ]) def testa_fatorial(entrada, esperado): assert fatorial(entrada) == esperado # fibonacci.py # Fn = 0 if n = 0 # Fn = 1 if n = 1 # Fn+1 + Fn-2 if n > 1 def fibonacci(n): if n < 2: return n else: return fibonacci(n - 1) + fibonacci(n - 2) import pytest @pytest.mark.parametrize("entrada, esperado", [ (0, 0), (1, 1), (2, 1), (3, 2), (4, 3), (5, 5), (6, 8), (7, 13) ]) def testa_fibonacci(entrada, esperado): assert fibonacci(entrada) == esperado # busca binária def busca_binaria(lista, elemento, min = 0, max = None): if max == None: # se nada for passado, o tamanho máximo é o tamanho da lista max = len(lista) - 1 if max < min: # situação que não encontrou o elemento return False else: meio = min + (max - min) // 2 if lista[meio] > elemento: return busca_binaria(lista, elemento, min, meio - 1) elif lista[meio] < elemento: return busca_binaria(lista, elemento, meio + 1, max) else: return meio a = [10, 20, 30, 40, 50, 60] import pytest @pytest.mark.parametrize("lista, valor, esperado", [ (a, 10, 0), (a, 20, 1), (a, 30, 2), (a, 40, 3), (a, 50, 4), (a, 60, 5), (a, 70, False), (a, 70, False), (a, 15, False), (a, -10, False) ]) def testa_busca_binaria(lista, valor, esperado): assert busca_binaria(lista, valor) == esperado """ Explanation: Week 6 Recursão (Definição. Como resolver um problema recursivo. Exemplos. Implementações.) End of explanation """ def merge_sort(lista): if len(lista) <= 1: return lista meio = len(lista) // 2 lado_esquerdo = merge_sort(lista[:meio]) lado_direito = merge_sort(lista[meio:]) return merge(lado_esquerdo, lado_direito) # intercala os dois lados def merge(lado_esquerdo, lado_direito): if not lado_esquerdo: # se o lado esquerdo for uma lista vazia... return lado_direito if not lado_direito: # se o lado direito for uma lista vazia... return lado_esquerdo if lado_esquerdo[0] < lado_direito[0]: # compara o primeiro elemento da posição do lado esquerdo com o primeiro do lado direito return [lado_esquerdo[0]] + merge(lado_esquerdo[1:], lado_direito) # merge(lado_esquerdo[1:]) ==> pega o lado esquerdo, menos o primeiro elemento return [lado_direito[0]] + merge(lado_esquerdo, lado_direito[1:]) """ Explanation: Mergesort Ordenação por Intercalação: Divida a lista na metade recursivamente, até que cada sublista contenha apenas 1 elemento (portanto, já ordenada). Repetidamente, intercale as sublistas para produzir novas listas ordenadas. Repita até que tenhamos apenas 1 lista no final (que estará ordenada). 
Ex: 6 5 3 1 8 7 2 4 5 6&nbsp;&nbsp;&nbsp;&nbsp;1 3&nbsp;&nbsp;&nbsp;&nbsp;7 8&nbsp;&nbsp;&nbsp;&nbsp;2 4 1 3 5 6&nbsp;&nbsp;&nbsp;&nbsp;2 4 7 8 1 2 3 4 5 6 7 8 End of explanation """ def x(n): if n == 0: #<espaço A> print(n) else: #<espaço B> x(n-1) print(n) #<espaço C> #<espaço D> #<espaço E> x(10) def x(n): if n >= 0 or n <= 2: print(n) # return n else: print(n-1) print(n-2) print(n-3) #return x(n-1) + x(n-2) + x(n-3) print(x(6)) def busca_binaria(lista, elemento, min=0, max=None): if max == None: max = len(lista)-1 if max < min: return False else: meio = min + (max-min)//2 print(lista[meio]) if lista[meio] > elemento: return busca_binaria(lista, elemento, min, meio - 1) elif lista[meio] < elemento: return busca_binaria(lista, elemento, meio + 1, max) else: return meio a = [-10, -2, 0, 5, 66, 77, 99, 102, 239, 567, 875, 934] a busca_binaria(a, 99) """ Explanation: Base da recursão é a condição que faz o problema ser definitivamente resolvido. Caso essa condição, essa base da recursão, não seja satisfeita, o problema continua sendo reduzido em instâncias menores até que a condição passe a ser satisfeita. Chamada recursiva é a linha onde a função faz uma chamada a ela mesma. Função recursiva é a função que chama ela mesma. A linha 2 tem a condição que é a base da recursão A linha 5 tem a chamada recursiva Para o algoritmo funcionar corretamente, é necessário trocar a linha 3 por “return 1” if (n < 2): if (n <= 1): No <espaço A> e no <espaço C> looping infinito Resultado: 6. Chamadas recursivas: nenhuma. Resultado: 20. Chamadas recursivas: 24 1 End of explanation """ def soma_lista_tradicional_way(lista): soma = 0 for i in range(len(lista)): soma += lista[i] return soma a = [-10, -2, 0, 5, 66, 77, 99, 102, 239, 567, 875, 934] soma_lista_tradicional_way(a) b = [-10, -2, 0, 5] soma_lista_tradicional_way(b) def soma_lista(lista): if len(lista) == 1: return lista[0] else: return lista[0] + soma_lista(lista[1:]) a = [-10, -2, 0, 5, 66, 77, 99, 102, 239, 567, 875, 934] soma_lista(a) # retorna 2952 b = [-10, -2, 0, 5] soma_lista(b) """ Explanation: Tarefa de programação: Lista de exercícios - 6 Exercício 1: Soma dos elementos de uma lista Implemente a função soma_lista(lista), que recebe como parâmetro uma lista de números inteiros e devolve um número inteiro correspondente à soma dos elementos desta lista. Sua solução deve ser implementada utilizando recursão. End of explanation """ def encontra_impares_tradicional_way(lista): lista_impares = [] for i in lista: if i % 2 != 0: # é impar! lista_impares.append(i) return lista_impares a = [5, 66, 77, 99, 102, 239, 567, 875, 934] encontra_impares_tradicional_way(a) b = [2, 5, 34, 66, 100, 102, 999] encontra_impares_tradicional_way(b) stack = ['a','b'] stack.extend(['g','h']) stack def encontra_impares(lista): if len(lista) == 0: return [] if lista[0] % 2 != 0: # se o elemento é impar return [lista[0]] + encontra_impares(lista[1:]) else: return encontra_impares(lista[1:]) a = [5, 66, 77, 99, 102, 239, 567, 875, 934] encontra_impares(a) encontra_impares([5]) encontra_impares([1, 2, 3]) encontra_impares([2, 4, 6, 8]) encontra_impares([9]) encontra_impares([4, 11]) encontra_impares([2, 10, 20, 7, 30, 12, 6, 6]) encontra_impares([]) encontra_impares([4, 331, 1001, 4]) """ Explanation: Exercício 2: Encontrando ímpares em uma lista Implemente a função encontra_impares(lista), que recebe como parâmetro uma lista de números inteiros e devolve uma outra lista apenas com os números ímpares da lista dada. Sua solução deve ser implementada utilizando recursão. 
Dica: você vai precisar do método extend() que as listas possuem. End of explanation """ def incomodam(n): if type(n) != int or n <= 0: return '' else: s1 = 'incomodam ' return s1 + incomodam(n - 1) incomodam('-1') incomodam(2) incomodam(3) incomodam(8) incomodam(-3) incomodam(1) incomodam(7) def incomodam(n): if type(n) != int or n <= 0: return '' else: s1 = 'incomodam ' return s1 + incomodam(n - 1) def elefantes(n): if type(n) != int or n <= 0: return '' if n == 1: return "Um elefante incomoda muita gente" else: return elefantes(n - 1) + str(n) + " elefantes " + incomodam(n) + ("muita gente" if n % 2 > 0 else "muito mais") + "\r\n" elefantes(1) print(elefantes(3)) elefantes(2) elefantes(3) print(elefantes(4)) type(str(3)) def incomodam(n): if type(n) != int or n < 0: return '' else: return print('incomodam ' * n) def elefantes(n): texto_inicial = 'Um elefante incomoda muita gente\n' texto_posterior1 = '%d elefantes ' + incomodam(n) + 'muito mais\n\n' texto_posterior2 = 'elefantes ' + incomodam(n) + 'muita gente\n' if n == 1: return print(texto_inicial) else: return print(texto_inicial) + print(texto_posterior1) elefantes(1) elefantes(2) """ Explanation: Exercício 3: Elefantes Este exercício tem duas partes: Implemente a função incomodam(n) que devolve uma string contendo "incomodam " (a palavra seguida de um espaço) n vezes. Se n não for um inteiro estritamente positivo, a função deve devolver uma string vazia. Essa função deve ser implementada utilizando recursão. Utilizando a função acima, implemente a função elefantes(n) que devolve uma string contendo a letra de "Um elefante incomoda muita gente..." de 1 até n elefantes. Se n não for maior que 1, a função deve devolver uma string vazia. Essa função também deve ser implementada utilizando recursão. Observe que, para um elefante, você deve escrever por extenso e no singular ("Um elefante..."); para os demais, utilize números e o plural ("2 elefantes..."). Dica: lembre-se que é possível juntar strings com o operador "+". Lembre-se também que é possível transformar números em strings com a função str(). Dica: Será que neste caso a base da recursão é diferente de n==1? Por exemplo, uma chamada a elefantes(4) deve devolver uma string contendo: Um elefante incomoda muita gente 2 elefantes incomodam incomodam muito mais 2 elefantes incomodam incomodam muita gente 3 elefantes incomodam incomodam incomodam muito mais 3 elefantes incomodam incomodam incomodam muita gente 4 elefantes incomodam incomodam incomodam incomodam muito mais End of explanation """
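# Sketch (not part of the original notebook): a parametrized pytest battery for the
# recursive helper incomodam() defined above, following the @pytest.mark.parametrize
# pattern used earlier in this notebook for fatorial() and fibonacci(). It assumes the
# first, string-returning version of incomodam().
import pytest

@pytest.mark.parametrize("entrada, esperado", [
    (0, ''),                                     # non-positive input -> empty string
    (-3, ''),
    ('-1', ''),                                  # non-integer input -> empty string
    (1, 'incomodam '),
    (3, 'incomodam incomodam incomodam '),
])
def testa_incomodam(entrada, esperado):
    assert incomodam(entrada) == esperado
"""
Explanation: A hedged addition, not part of the original exercise solutions: the course suggests writing a test battery for your own code, so this sketch applies the parametrize pattern used earlier in the notebook to the recursive incomodam() helper above. The expected strings assume the first, string-returning implementation of incomodam(); adjust them if you change the trailing-space convention.
End of explanation
"""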
AllenDowney/ModSim
soln/chap07.ipynb
gpl-2.0
# install Pint if necessary try: import pint except ImportError: !pip install pint # download modsim.py if necessary from os.path import exists filename = 'modsim.py' if not exists(filename): from urllib.request import urlretrieve url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/' local, _ = urlretrieve(url+filename, filename) print('Downloaded ' + local) # import functions from modsim from modsim import * """ Explanation: Chapter 7 Modeling and Simulation in Python Copyright 2021 Allen Downey License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International End of explanation """ import os filename = 'World_population_estimates.html' if not os.path.exists(filename): !wget https://raw.githubusercontent.com/AllenDowney/ModSimPy/master/data/World_population_estimates.html from pandas import read_html tables = read_html(filename, header=0, index_col=0, decimal='M') table2 = tables[2] table2.columns = ['census', 'prb', 'un', 'maddison', 'hyde', 'tanton', 'biraben', 'mj', 'thomlinson', 'durand', 'clark'] un = table2.un / 1e9 census = table2.census / 1e9 """ Explanation: In the previous chapter we developed a population model where net growth during each time step is proportional to the current population. This model seems more realistic than the constant growth model, but it does not fit the data as well. There are a few things we could try to improve the model: Maybe net growth depends on the current population, but the relationship is quadratic, not linear. Maybe the net growth rate varies over time. In this chapter, we'll explore the first option. In the exercises, you will have a chance to try the second. Here's the code that reads the data. End of explanation """ from modsim import TimeSeries def run_simulation(system, growth_func): """Simulate the system using any update function. system: System object growth_func: function that computes the population next year returns: TimeSeries """ results = TimeSeries() results[system.t_0] = system.p_0 for t in range(system.t_0, system.t_end): growth = growth_func(results[t], t, system) results[t+1] = results[t] + growth return results from modsim import decorate def plot_estimates(): census.plot(style=':', label='US Census') un.plot(style='--', label='UN DESA') decorate(xlabel='Year', ylabel='World population (billion)') """ Explanation: And here are the functions from the previous chapter. End of explanation """ def growth_func_quad(pop, t, system): return system.alpha * pop + system.beta * pop**2 """ Explanation: Quadratic growth It makes sense that net growth should depend on the current population, but maybe it's not a linear relationship, like this: net_growth = system.alpha * pop Maybe it's a quadratic relationship, like this: net_growth = system.alpha * pop + system.beta * pop**2 We can test that conjecture with a new update function: End of explanation """ from modsim import System t_0 = census.index[0] p_0 = census[t_0] t_end = census.index[-1] system = System(t_0=t_0, p_0=p_0, t_end=t_end) """ Explanation: Here's the System object we'll use, initialized with t_0, p_0, and t_end. End of explanation """ system.alpha = 25 / 1000 system.beta = -1.8 / 1000 """ Explanation: Now we have to add the parameters alpha and beta . I chose the following values by trial and error; we'll see better ways to do it later. 
End of explanation """ results = run_simulation(system, growth_func_quad) """ Explanation: And here's how we run it: End of explanation """ results.plot(color='gray', label='model') plot_estimates() decorate(title='Quadratic Growth Model') """ Explanation: And here are the results. End of explanation """ from numpy import linspace pop_array = linspace(0, 15, 101) """ Explanation: The model fits the data well over the whole range, with just a bit of space between them in the 1960s. Of course, we should expect the quadratic model to fit better than the constant and proportional models because it has two parameters we can choose, where the other models have only one. In general, the more parameters you have to play with, the better you should expect the model to fit. But fitting the data is not the only reason to think the quadratic model might be a good choice. It also makes sense; that is, there is a legitimate reason to expect the relationship between growth and population to have this form. To understand it, let's look at net growth as a function of population. Net growth Let's plot the relationship between growth and population in the quadratic model. I'll use linspace to make an array of 101 populations from 0 to 15 billion. End of explanation """ growth_array = (system.alpha * pop_array + system.beta * pop_array**2) """ Explanation: Now I'll use the quadratic model to compute net growth for each population. End of explanation """ from matplotlib.pyplot import plot """ Explanation: To plot the growth rate versus population, we can import the plot function from Matplotlib: End of explanation """ plot(pop_array, growth_array, label='growth') decorate(xlabel='Population (billions)', ylabel='Net growth (billions)', title='Growth vs. Population') """ Explanation: And use it like this. End of explanation """ -system.alpha / system.beta """ Explanation: Note that the x-axis is not time, as in the previous figures, but population. We can divide this curve into four regimes of behavior: When the population is less than 3-4 billion, net growth is proportional to population, as in the proportional model. In this regime, the population grows slowly because the population is small. Between 4 billion and 10 billion, the population grows quickly because there are a lot of people. Above 10 billion, population grows more slowly; this behavior models the effect of resource limitations that decrease birth rates or increase death rates. Above 14 billion, resources are so limited that the death rate exceeds the birth rate and net growth becomes negative. Just below 14 billion, there is a point where net growth is 0, which means that the population does not change. At this point, the birth and death rates are equal, so the population is in equilibrium. Equilibrium To find the equilibrium point, we can find the roots, or zeros, of this equation: $$\Delta p = \alpha p + \beta p^2$$ where $\Delta p$ is net population growth, $p$ is current population, and $\alpha$ and $\beta$ are the parameters of the model. We can rewrite the right hand side like this: $$\Delta p = p (\alpha + \beta p)$$ which is $0$ when $p=0$ or $p=-\alpha/\beta$. We can use the parameters of the system to compute the second equilibrium point: End of explanation """ def carrying_capacity(system): K = -system.alpha / system.beta return K sys1 = System(alpha=0.025, beta=-0.0018) pop = carrying_capacity(sys1) print(pop) """ Explanation: With these parameters, net growth is 0 when the population is about 13.9 billion. 
In the context of population modeling, the quadratic model is more conventionally written like this: $$\Delta p = r p (1 - p / K)$$ This is the same model; it's just a different way to parameterize it. Given $\alpha$ and $\beta$, we can compute $r=\alpha$ and $K=-\alpha/\beta$. In this version, it is easier to interpret the parameters: $r$ is the maximum growth rate, observed when $p$ is small, and $K$ is the equilibrium point. $K$ is also called the carrying capacity, since it indicates the maximum population the environment can sustain. Summary In this chapter we implemented a quadratic growth model where net growth depends on the current population and the population squared. This model fits the data well, and we saw one reason why: it is based on the assumption that there is a limit to the number of people the Earth can support. In the next chapter we'll use the models we have developed to generate predictions. But first, I want to warn you about a few things that can go wrong when you write functions. Dysfunctions When people learn about functions, there are a few things they often find confusing. In this section I present and explain some common problems. As an example, suppose you want a function that takes a System object, with variables alpha and beta, and computes the carrying capacity, -alpha/beta. Here's a good solution: End of explanation """ def carrying_capacity(): K = -sys1.alpha / sys1.beta return K sys1 = System(alpha=0.025, beta=-0.0018) pop = carrying_capacity() print(pop) """ Explanation: Now let's see all the ways that can go wrong. Dysfunction #1: Not using parameters. In the following version, the function doesn't take any parameters; when sys1 appears inside the function, it refers to the object we create outside the function. End of explanation """ # WRONG def carrying_capacity(system): system = System(alpha=0.025, beta=-0.0018) K = -system.alpha / system.beta return K sys1 = System(alpha=0.03, beta=-0.002) pop = carrying_capacity(sys1) print(pop) """ Explanation: This version actually works, but it is not as versatile as it could be. If there are several System objects, this function can only work with one of them, and only if it is named sys1. Dysfunction #2: Clobbering the parameters. When people first learn about parameters, they often write functions like this: End of explanation """ # WRONG def carrying_capacity(system): K = -system.alpha / system.beta sys1 = System(alpha=0.025, beta=-0.0018) pop = carrying_capacity(sys1) print(pop) """ Explanation: In this example, we have a System object named sys1 that gets passed as an argument to carrying_capacity. But when the function runs, it ignores the argument and immediately replaces it with a new System object. As a result, this function always returns the same value, no matter what argument is passed. When you write a function, you generally don't know what the values of the parameters will be. Your job is to write a function that works for any valid values. If you assign your own values to the parameters, you defeat the whole purpose of functions. Dysfunction #3: No return value. Here's a version that computes the value of K but doesn't return it. 
End of explanation """ # Solution system.r = system.alpha system.K = -system.alpha/system.beta system.r, system.K # Solution def growth_func_quad2(pop, t, system): return system.r * pop * (1 - pop / system.K) # Solution results2 = run_simulation(system, growth_func_quad2) results2.plot(color='gray', label='model') plot_estimates() decorate(title='Quadratic Growth Model, alternate parameters') """ Explanation: A function that doesn't have a return statement actually returns a special value called None, so in this example the value of pop is None. If you are debugging a program and find that the value of a variable is None when it shouldn't be, a function without a return statement is a likely cause. Dysfunction #4: Ignoring the return value. Finally, here's a version where the function is correct, but the way it's used is not. ``` def carrying_capacity(system): K = -system.alpha / system.beta return K sys1 = System(alpha=0.025, beta=-0.0018) carrying_capacity(sys1) print(K) ``` In this example, carrying_capacity runs and returns K, but the return value doesn't get displayed or assigned to a variable. If we try to print K, we get a NameError, because K only exists inside the function. When you call a function that returns a value, you should do something with the result. Exercises Exercise: In a previous section, we saw a different way to parameterize the quadratic model: $$ \Delta p = r p (1 - p / K) $$ where $r=\alpha$ and $K=-\alpha/\beta$. Write a version of growth_func that implements this version of the model. Test it by computing the values of r and K that correspond to alpha=0.025, beta=-0.0018, and confirm that you get the same results. End of explanation """ # Solution p0_array = linspace(1, 25, 11) for p_0 in p0_array: system.p_0 = p_0 results3 = run_simulation(system, growth_func_quad) results3.plot(label='_nolegend') decorate(xlabel='Year', ylabel='Population (billions)', title='Projections with hypothetical starting populations') """ Explanation: Exercise: What happens if we start with an initial population above the carrying capacity, like 20 billion? Run the model with initial populations between 1 and 20 billion, and plot the results on the same axes. Hint: If there are too many labels in the legend, you can plot results like this: results.plot(label='_nolegend') End of explanation """
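# Sketch (not part of the original chapter): the corrected usage from Dysfunction #4.
# Assign the value that carrying_capacity returns to a variable, then use it.
def carrying_capacity(system):
    K = -system.alpha / system.beta
    return K

sys1 = System(alpha=0.025, beta=-0.0018)
K = carrying_capacity(sys1)   # capture the return value...
print(K)                      # ...and do something with it (prints about 13.9)
"""
Explanation: A brief, hedged addition (not from the original text): Dysfunction #4 shows a call whose return value is ignored. The sketch above repeats the good version of carrying_capacity and captures its return value in a variable before printing it, which is the fix that section asks for. With alpha=0.025 and beta=-0.0018 the printed value is about 13.9, matching the equilibrium computed earlier in the chapter.
End of explanation
"""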
eric11/PMCnetworks
1-scape-parse/parser-postprocessing.ipynb
mit
import sqlite3
conn = sqlite3.connect('pmcv2-full.db')
c = conn.cursor()

# one index per lookup table, all keyed on PMID
c.execute('''CREATE INDEX pmidix ON refs(pmid)''')
c.execute('''CREATE INDEX pmcidix ON pmcidmap(pmid)''')
c.execute('''CREATE INDEX metaix ON meta(pmid)''')
c.execute('''CREATE INDEX authorsix ON authors(pmid)''')
c.execute('''CREATE INDEX keywordsix ON keywords(pmid)''')
c.execute('''CREATE INDEX abstractsix ON abstracts(pmid)''')
#c.execute('''CREATE INDEX tfidfix ON tfidf(pmid)''')
c.execute('''COMMIT''')
c.close()
"""
Explanation: create indices for faster lookups
End of explanation
"""
import sqlite3
conn = sqlite3.connect('pmcv2-full.db')
c = conn.cursor()

#c.execute('''DROP TABLE authors2''')
c.execute('''CREATE TABLE authors2
             (pmid integer, authnum integer, fn text, ln text, afil text, abbr text)''')

# copy every author row into the new table, adding a lowercased "firstnamelastname" abbreviation
c.execute('''SELECT * FROM authors''')
authtab = c.fetchall()
for entry in authtab:
    authorabbr = (entry[2]+entry[3]).replace(" ", "").lower()
    c.execute("INSERT INTO authors2 (pmid, authnum, fn, ln, afil, abbr) VALUES (?, ?, ?, ?, ?, ?)",
              (entry[0], entry[1], entry[2], entry[3], entry[4], authorabbr))
c.execute('''COMMIT''')

# replace the old authors table with the extended one and index it
c.execute('''DROP TABLE authors''')
c.execute('''ALTER TABLE authors2 RENAME TO authors''')
c.execute('''CREATE INDEX authorsabbrix ON authors(abbr)''')
c.execute('''CREATE INDEX authorsix ON authors(pmid)''')
c.execute('''COMMIT''')
c.close()
"""
Explanation: add author abbreviations to the authors table and create an index on them, to allow generation of per-author statistics (this maps author abbreviations to their publication PMIDs)
End of explanation
"""
import sqlite3
conn = sqlite3.connect('pmcv2-full.db')
c = conn.cursor()

# drop any previous version of the dictionary table before re-creating it
c.execute('''DROP TABLE IF EXISTS authorfndict''')
c.execute('''CREATE TABLE authorfndict
             (authorabbr text, authorfn text, PRIMARY KEY (authorabbr))''')

# build an abbreviation -> (first name, last name) mapping, keeping one entry per abbreviation
c.execute('''SELECT fn, ln FROM authors''')
authnames = c.fetchall()
authorabbrs = dict()
for entry in authnames:
    authorabbr = (entry[0]+entry[1]).replace(" ", "").lower()
    authorabbrs[authorabbr] = entry

for entry in authorabbrs.items():
    #example item: (u'jiarongmiao', (u'Jiarong', u'Miao'))
    c.execute("INSERT INTO authorfndict (authorabbr, authorfn) VALUES (?, ?)",
              (entry[0], entry[1][0] + " " + entry[1][1]))
c.execute('''COMMIT''')
c.close()
"""
Explanation: create the authors full-name dictionary table, mapping each abbreviation to a full name
End of explanation
"""
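# Usage sketch (an assumption, not part of the original pipeline): look up one author
# by the lowercased "firstnamelastname" abbreviation built above. The abbreviation
# 'jiarongmiao' is just the example value mentioned in the comment of the previous cell.
import sqlite3

conn = sqlite3.connect('pmcv2-full.db')
c = conn.cursor()

abbr = 'jiarongmiao'  # hypothetical example; substitute any abbreviation present in authors

# resolve the abbreviation to a display name via the dictionary table
c.execute("SELECT authorfn FROM authorfndict WHERE authorabbr = ?", (abbr,))
fullname = c.fetchone()

# collect all PMIDs attributed to this abbreviation (served by the authorsabbrix index)
c.execute("SELECT pmid FROM authors WHERE abbr = ?", (abbr,))
pmids = [row[0] for row in c.fetchall()]

print(fullname, len(pmids))
c.close()
"""
Explanation: A minimal usage sketch, added for illustration and not part of the original notebook: it shows the kind of per-author lookup the tables and indexes above are meant to support, resolving an abbreviation to a full name through authorfndict and collecting that author's PMIDs from the indexed authors table. The specific abbreviation is only an example.
End of explanation
"""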
Silmathoron/nest-simulator
doc/model_details/noise_generator.ipynb
gpl-2.0
import sympy sympy.init_printing() x = sympy.Symbol('x') sympy.series((1-sympy.exp(-x))/(1+sympy.exp(-x)), x) """ Explanation: The NEST noise_generator Hans Ekkehard Plesser, 2015-06-25 This notebook describes how the NEST noise_generator model works and what effect it has on model neurons. NEST needs to be in your PYTHONPATH to run this notebook. Basics The noise_generator emits a piecewise constant current that changes at fixed intervals $\delta$. For each interval, a new amplitude is chosen from the normal distribution. Each target neuron receives a different realization of the current. To be precise, the output current of the generator is given by $$I(t) = \mu + \sigma N_j \qquad\text{with $j$ such that}\quad j\delta < t \leq (j+1)\delta$$ where $N_j$ is the value drawn from the zero-mean unit-variance normal distribution for interval $j$ containing $t$. When using the generator with modulated variance, the noise current is given by $$I(t) = \mu + \sqrt{\sigma^2 + \sigma_m^2\sin(2\pi f j\delta + \frac{2\pi}{360}\phi_d)} N_j \;.$$ Mathematical symbols match model parameters as follows |Symbol|Parameter|Unit|Default|Description| |------|:--------|:---|------:|:----------| |$\mu$|mean|pA|0 pA|mean of the noise current amplitude| |$\sigma$|std|pA|0 pA|standard deviation of the noise current amplitude| |$\sigma_m$|std_mod|pA|0 pA|modulation depth of the std. deviation of the noise current amplitude| |$\delta$|dt|ms|1 ms|interval between current amplitude changes| |$f$|frequency|Hz|0 Hz| frequency of variance modulation| |$\phi_d$|phase|[deg]|0$^{\circ}$| phase of variance modulation| For the remainder of this document, we will only consider the current at time points $t_j=j\delta$ and define $$I_j = I(t_j+) = \mu + \sigma N_j $$ and correspondingly for the case of modulated noise. Note that $I_j$ is thus the current emitted during $(t_j, t_{j+1}]$, following NEST's use of left-open, right-closed intervals. We also set $\omega=2\pi f$ and $\phi=\frac{2\pi}{360}\phi_d$ for brevity. Properties of the noise current The noise current is a piecewise constant current. Thus, it is only an approximation to white noise and the properties of the noise will depend on the update interval $\delta$. The default update interval is $\delta = 1$ms. We chose this value so that the default would be independent from the time step $h$ of the simulation, assuming that time steps larger than 1 ms are rarely used. It also is plausible to assume that most time steps chosen will divide 1 ms evenly, so that changes in current amplitude will coincide with time steps. If this is not the case, the subsequent analysis does not apply exactly. The currents to all targets of a noise generator have different amplitudes, but always change simultaneously at times $j\delta$. Across an ensemble of targets or realizations, we have \begin{align} \langle I_j\rangle &= \mu \ \langle \Delta I_j^2\rangle &= \sigma^2 \qquad \text{without modulation} \ \langle \Delta I_j^2\rangle &= \sigma^2 + \sigma_m^2\sin( \omega j\delta + \phi) \qquad \text{with modulation.} \end{align} Without modulation, the autocorrelation of the noise is given by $$\langle (I_j-\mu) (I_k-\mu)\rangle = \sigma^2\delta_{jk}$$ where $\delta_{jk}$ is Kronecker's delta. 
With modulation, the autocorrelation is $$\langle (I_j-\mu) (I_k-\mu)\rangle = \sigma_j^2\delta_{jk}\qquad\text{where}\; \sigma_j = \sqrt{\sigma^2 + \sigma_m^2\sin( j\delta\omega + \phi_d)}\;.$$ Note that it is currently not possible to record this noise current directly in NEST, since a multimeter cannot record from a noise_generator.

Noise generator's effect on a neuron
Precisely how a current injected into a neuron will affect that neuron obviously depends on the neuron itself. We consider here the subthreshold dynamics most widely used in NEST, namely the leaky integrator. The analysis that follows applies directly to all iaf_psc_* models. It applies to conductance-based neurons such as the iaf_cond_* models only as long as no synaptic input is present, since synaptic input changes the membrane conductances.

Membrane potential dynamics
We focus here only on subthreshold dynamics, i.e., we assume that the firing threshold of the neuron is $V_{\text{th}}=\infty$. We also ignore all synaptic input, which is valid for linear models, and set the resting potential $E_L=0$ mV for convenience. The membrane potential $V$ is then governed by $$\dot{V} = - \frac{V}{\tau} + \frac{I}{C}$$ where $\tau$ is the membrane time constant and $C$ the capacitance. We further assume $V(0)=0$ mV.

We now focus on the membrane potential at times $t_j=j\delta$. Let $V_j=V(j\delta)$ be the membrane potential at time $t_j$. Then, a constant current $I_j$ will be applied to the neuron until $t_{j+1}=t_j+\delta$, at which time the membrane potential will be $$V_{j+1} = V_j e^{-\delta/\tau} + \left(1-e^{-\delta/\tau}\right)\frac{I_j\tau}{C} \;.$$ We can apply this backward in time towards $V_0=0$:
\begin{align}
V_{j+1} &= V_j e^{-\delta/\tau} + \left(1-e^{-\delta/\tau}\right)\frac{I_j\tau}{C} \\
&= \left[V_{j-1} e^{-\delta/\tau} + \left(1-e^{-\delta/\tau}\right)\frac{I_{j-1}\tau}{C}\right] e^{-\delta/\tau} + \left(1-e^{-\delta/\tau}\right)\frac{I_j\tau}{C} \\
&= \left(1-e^{-\delta/\tau}\right)\frac{\tau}{C}\sum_{k=0}^{j} I_k e^{-(j-k)\delta/\tau} \\
&= \left(1-e^{-\delta/\tau}\right)\frac{\tau}{C}\sum_{k=0}^{j} I_{k} e^{-k\delta/\tau} \;.
\end{align}
In the last step, we exploited the mutual independence of the random current amplitudes $I_k$, which allows us to renumber them arbitrarily.

Mean and variance of the membrane potential
The mean of the membrane potential at $t_{j+1}$ is thus
\begin{align}
\langle V_{j+1}\rangle &= \left(1-e^{-\delta/\tau}\right)\frac{\tau}{C}\sum_{k=0}^{j} \langle I_{k} \rangle e^{-k\delta/\tau} \\
&= \frac{\mu\tau}{C}\left(1-e^{-\delta/\tau}\right)\sum_{k=0}^{j} e^{-k\delta/\tau} \\
&= \frac{\mu\tau}{C}\left(1-e^{-(j+1)\delta/\tau}\right) \\
&= \frac{\mu\tau}{C}\left(1-e^{-t_{j+1}/\tau}\right)
\end{align}
as expected; note that we used the geometric sum formula in the second step. 
To obtain the variance of the membrane potential at $t_{j+1}$, we first compute the second moment $$\langle V_{j+1}^2 \rangle = \frac{\tau^2}{C^2}\left(1-e^{-\delta/\tau}\right)^2 \left\langle\left(\sum_{k=0}^{j} I_{k} e^{-k\delta/\tau}\right)^2\right\rangle$$ Substituting $q = e^{-\delta/\tau}$ and $\alpha = \frac{\tau^2}{C^2}\left(1-e^{-\delta/\tau}\right)^2= \frac{\tau^2}{C^2}\left(1-q\right)^2$ and , we have \begin{align} \langle V_{j+1}^2 \rangle &= \alpha \left\langle\left(\sum_{k=0}^{j} I_{k} q^k\right)^2\right\rangle \ &= \alpha \sum_{k=0}^{j} \sum_{m=0}^{j} \langle I_k I_m \rangle q^{k+m} \ &= \alpha \sum_{k=0}^{j} \sum_{m=0}^{j} (\mu^2 + \sigma_k^2 \delta_{km}) q^{k+m} \ &= \alpha \mu^2 \left(\sum_{k=0}^j q^k\right)^2 + \alpha \sum_{k=0}^{j} \sigma_k^2 q^{2k} \ &= \langle V_{j+1}\rangle^2 + \alpha \sum_{k=0}^{j} \sigma_k^2 q^{2k} \;. \end{align} Evaluating the remaining sum for the modulate case will be tedious, so we focus for now on the unmodulated case, i.e., $\sigma\equiv\sigma_k$, so that we again are left with a geometric sum, this time over $q^2$. We can now subtract the square of the mean to obtain the variance \begin{align} \langle (\Delta V_{j+1})^2 \rangle &= \langle V_{j+1}^2 \rangle - \langle V_{j+1}\rangle^2 \ &= \alpha \sigma^2 \frac{q^{2(j+1)}-1}{q^2-1} \ &= \frac{\sigma^2\tau^2}{C^2} (1-q)^2 \frac{q^{2(j+1)}-1}{q^2-1} \ &= \frac{\sigma^2\tau^2}{C^2} \frac{1-q}{1+q}\left(1-q^{2(j+1)}\right) \ &= \frac{\sigma^2\tau^2}{C^2} \frac{1-e^{-\delta/\tau}}{1+e^{-\delta/\tau}}\left(1-e^{-2t_{j+1}/\tau}\right) \;. \end{align} In the last step, we used that $1-q^2=(1-q)(1+q)$. The last term in this expression describes the approach of the variance of the membrane potential to its steady-state value. The fraction in front of it describes the effect of switching current amplitudes at intervals $\delta$ instead of instantenously as in real white noise. We now have in the long-term limit $$\langle (\Delta V)^2 \rangle = \lim_{j\to\infty} \langle (\Delta V_{j+1})^2 \rangle = \frac{\sigma^2\tau^2}{C^2} \frac{1-e^{-\delta/\tau}}{1+e^{-\delta/\tau}} \;. $$ We expand the fraction: End of explanation """ import math import numpy as np import scipy import matplotlib.pyplot as plt %matplotlib inline def noise_params(V_mean, V_std, dt=1.0, tau_m=10., C_m=250.): 'Returns mean and std for noise generator for parameters provided; defaults for iaf_psc_alpha.' return C_m / tau_m * V_mean, math.sqrt(2/(tau_m*dt))*C_m*V_std def V_asymptotic(mu, sigma, dt=1.0, tau_m=10., C_m=250.): 'Returns asymptotic mean and std of V_m' V_mean = mu * tau_m / C_m V_std = (sigma * tau_m / C_m) * np.sqrt(( 1 - math.exp(-dt/tau_m) ) / ( 1 + math.exp(-dt/tau_m) )) return V_mean, V_std def V_mean(t, mu, tau_m=10., C_m=250.): 'Returns predicted voltage for given times and parameters.' vm, _ = V_asymptotic(mu, sigma, tau_m=tau_m, C_m=C_m) return vm * ( 1 - np.exp( - t / tau_m ) ) def V_std(t, sigma, dt=1.0, tau_m=10., C_m=250.): 'Returns predicted variance for given times and parameters.' _, vms = V_asymptotic(mu, sigma, dt=dt, tau_m=tau_m, C_m=C_m) return vms * np.sqrt(1 - np.exp(-2*t/tau_m)) import nest def simulate(mu, sigma, dt=1.0, tau_m=10., C_m=250., N=1000, t_max=50.): ''' Simulate an ensemble of N iaf_neurons driven by noise_generator. 
Returns - voltage matrix, one column per neuron - time axis indexing matrix rows - time shift due to delay, time at which first current arrives ''' resolution = 0.1 delay = 1.0 nest.ResetKernel() nest.SetKernelStatus({'resolution': resolution}) ng = nest.Create('noise_generator', params={'mean': mu, 'std': sigma, 'dt': dt}) vm = nest.Create('voltmeter', params={'interval': resolution}) nrns = nest.Create('iaf_psc_alpha', N, params={'E_L': 0., 'V_m': 0., 'V_th': 1e6, 'tau_m': tau_m, 'C_m': C_m}) nest.Connect(ng, nrns, syn_spec={'delay': delay}) nest.Connect(vm, nrns) nest.Simulate(t_max) # convert data into time axis vector and matrix with one column per neuron ev = nest.GetStatus(vm, keys=['events'])[0][0] t, s, v = ev['times'], ev['senders'], ev['V_m'] tix = np.array(np.round(( t - t.min() ) / resolution), dtype=int) sx = np.unique(s) assert len(sx) == N six = s - s.min() V = np.zeros((tix.max()+1, N)) for ix, vm in enumerate(v): V[tix[ix], six[ix]] = vm # time shift due to delay and onset after first step t_shift = delay + resolution return V, np.unique(t), t_shift """ Explanation: We thus have for $\delta \ll \tau$ and $t\gg\tau$ $$\langle (\Delta V)^2 \rangle \approx \frac{\delta\tau \sigma^2 }{2 C^2} \;.$$ How to obtain a specific mean and variance of the potential In order to obtain a specific mean membrane potential $\bar{V}$ with standard deviation $\Sigma$ for given neuron parameters $\tau$ and $C$ and fixed current-update interval $\delta$, we invert the expressions obtained above. For the mean, we have for $t\to\infty$ $$\langle V\rangle = \frac{\mu\tau}{C} \qquad\Rightarrow\qquad \mu = \frac{C}{\tau} \bar{V}$$ and for the standard deviation $$\langle (\Delta V)^2 \rangle \approx \frac{\delta\tau \sigma^2 }{2 C^2} \qquad\Rightarrow\qquad \sigma = \sqrt{\frac{2}{\delta\tau}}C\Sigma \;.$$ Tests and examples We will now test the expressions derived above against NEST. We first define some helper functions. End of explanation """ dt = 1.0 mu, sigma = noise_params(0., 1., dt=dt) print("mu = {:.2f}, sigma = {:.2f}".format(mu, sigma)) V, t, ts = simulate(mu, sigma, dt=dt) V_mean_th = V_mean(t, mu) V_std_th = V_std(t, sigma, dt=dt) plt.plot(t, V.mean(axis=1), 'b-', label=r'$\bar{V_m}$') plt.plot(t + ts, V_mean_th, 'b--', label=r'$\langle V_m \rangle$') plt.plot(t, V.std(axis=1), 'r-', label=r'$\sqrt{\bar{\Delta V_m^2}}$') plt.plot(t + ts, V_std_th, 'r--', label=r'$\sqrt{\langle (\Delta V_m)^2 \rangle}$') plt.legend() plt.xlabel('Time $t$ [ms]') plt.ylabel('Membrane potential $V_m$ [mV]') plt.xlim(0, 50); """ Explanation: A first test simulation End of explanation """ dt = 1.0 mu, sigma = noise_params(2., 1., dt=dt) print("mu = {:.2f}, sigma = {:.2f}".format(mu, sigma)) V, t, ts = simulate(mu, sigma, dt=dt) V_mean_th = V_mean(t, mu) V_std_th = V_std(t, sigma, dt=dt) plt.plot(t, V.mean(axis=1), 'b-', label=r'$\bar{V_m}$') plt.plot(t + ts, V_mean_th, 'b--', label=r'$\langle V_m \rangle$') plt.plot(t, V.std(axis=1), 'r-', label=r'$\sqrt{\bar{\Delta V_m^2}}$') plt.plot(t + ts, V_std_th, 'r--', label=r'$\sqrt{\langle (\Delta V_m)^2 \rangle}$') plt.legend() plt.xlabel('Time $t$ [ms]') plt.ylabel('Membrane potential $V_m$ [mV]') plt.xlim(0, 50); """ Explanation: Theory and simulation are in excellent agreement. The regular "drops" in the standard deviation are a consquence of the piecewise constant current and the synchronous switch in current for all neurons. It is discussed in more detail below. A case with non-zero mean We repeat the previous simulation, but now with non-zero mean current. 
End of explanation """ dt = 0.1 mu, sigma = noise_params(0., 1., dt=dt) print("mu = {:.2f}, sigma = {:.2f}".format(mu, sigma)) V, t, ts = simulate(mu, sigma, dt=dt) V_mean_th = V_mean(t, mu) V_std_th = V_std(t, sigma, dt=dt) plt.plot(t, V.mean(axis=1), 'b-', label=r'$\bar{V_m}$') plt.plot(t + ts, V_mean_th, 'b--', label=r'$\langle V_m \rangle$') plt.plot(t, V.std(axis=1), 'r-', label=r'$\sqrt{\bar{\Delta V_m^2}}$') plt.plot(t + ts, V_std_th, 'r--', label=r'$\sqrt{\langle (\Delta V_m)^2 \rangle}$') plt.legend() plt.xlabel('Time $t$ [ms]') plt.ylabel('Membrane potential $V_m$ [mV]') plt.xlim(0, 50); """ Explanation: We again observe excellent agreement between theory and simulation. Shorter and longer switching intervals We now repeat the previous simulation for zero mean with shorter ($\delta=0.1$ ms) and longer ($\delta=10$ ms) switching intervals. End of explanation """ dt = 10.0 mu, sigma = noise_params(0., 1., dt=dt) print("mu = {:.2f}, sigma = {:.2f}".format(mu, sigma)) V, t, ts = simulate(mu, sigma, dt=dt) V_mean_th = V_mean(t, mu) V_std_th = V_std(t, sigma, dt=dt) plt.plot(t, V.mean(axis=1), 'b-', label=r'$\bar{V_m}$') plt.plot(t + ts, V_mean_th, 'b--', label=r'$\langle V_m \rangle$') plt.plot(t, V.std(axis=1), 'r-', label=r'$\sqrt{\bar{\Delta V_m^2}}$') plt.plot(t + ts, V_std_th, 'r--', label=r'$\sqrt{\langle (\Delta V_m)^2 \rangle}$') plt.legend() plt.xlabel('Time $t$ [ms]') plt.ylabel('Membrane potential $V_m$ [mV]') plt.xlim(0, 50); """ Explanation: Again, agreement is fine and the slight drooping artefacts are invisible, since the noise is now updated on every time step. Note also that the noise standard deviation $\sigma$ is larger (by $\sqrt{10}$) than for $\delta=1$ ms. End of explanation """ plt.plot(t, V[:, :25], lw=3, alpha=0.5); plt.plot([31.1, 31.1], [-3, 3], 'k--', lw=2) plt.plot([41.1, 41.1], [-3, 3], 'k--', lw=2) plt.xlabel('Time $t$ [ms]') plt.ylabel('Membrane potential $V_m$ [mV]') plt.xlim(30, 42); plt.ylim(-2.1, 2.1); """ Explanation: For $\delta=10$, i.e., a noise switching time equal to $\tau_m$, the drooping artefact becomes clearly visible. Note that our theory developed above only applies to the points at which the input current switches, i.e., at multiples of $\delta$, beginning with the arrival of the first current at the neuron (at delay plus one time step). At those points, agreement with theory is good. Why does the standard deviation dip between current updates? In the last case, where $\delta = \tau_m$, the dips in the membrane potential between changes in the noise current become quite large. They can be explained as follows. For large $\delta$, we have at the end of a $\delta$-interval for neuron $n$ membrane potential $V_n(t_{j})\approx I_{n,j-1}\tau/C$ and these values will be distributed across neurons with standard deviation $\sqrt{\langle (\Delta V_m)^2 \rangle}$. Then, input currents of all neurons switch to new values $I_{n,j}$ and the membrane potential of each neuron now evolves towards $V_n(t_{j+1})\approx I_{n,j}\tau/C$. Since current values are independent of each other, this means that membrane-potential trajectories criss-cross each other, constricting the variance of the membrane potential before they approach their new steady-state values, as illustrated below. You should therefore use short switching times $\delta$. 
End of explanation """ from scipy.signal import fftconvolve from statsmodels.tsa.stattools import acf def V_autocorr(V_mean, V_std, dt=1., tau_m=10.): 'Returns autocorrelation of membrane potential and pertaining time axis.' mu, sigma = noise_params(V_mean, V_std, dt=dt, tau_m=tau_m) V, t, ts = simulate(mu, sigma, dt=dt, tau_m=tau_m, t_max=5000., N=20) # drop the first second V = V[t>1000., :] # compute autocorrelation columnwise, then average over neurons nlags = 1000 nt, nn = V.shape acV = np.zeros((nlags+1, nn)) for c in range(V.shape[1]): acV[:, c] = acf(V[:, c], unbiased=True, nlags=1000, fft=True) #fftconvolve(V[:, c], V[::-1, c], mode='full') / V[:, c].std()**2 acV = acV.mean(axis=1) # time axis dt = t[1] - t[0] acT = np.arange(0, nlags+1) * dt return acV, acT acV_01, acT_01 = V_autocorr(0., 1., 0.1) acV_10, acT_10 = V_autocorr(0., 1., 1.0) acV_50, acT_50 = V_autocorr(0., 1., 5.0) plt.plot(acT_01, acV_01, label=r'$\delta = 0.1$ms'); plt.plot(acT_10, acV_10, label=r'$\delta = 1.0$ms'); plt.plot(acT_50, acV_50, label=r'$\delta = 5.0$ms'); plt.xlim(0, 50); plt.ylim(-0.1, 1.05); plt.legend(); plt.xlabel(r'Delay $\tau$ [ms]') plt.ylabel(r'$\langle V(t)V(t+\tau)\rangle$'); """ Explanation: Autocorrelation We briefly look at the autocorrelation of the membrane potential for three values of $\delta$. End of explanation """ acV_t01, acT_t01 = V_autocorr(0., 1., 0.1, 1.) acV_t05, acT_t05 = V_autocorr(0., 1., 0.1, 5.) acV_t10, acT_t10 = V_autocorr(0., 1., 0.1, 10.) plt.plot(acT_t01, acV_t01, label=r'$\tau_m = 1$ms'); plt.plot(acT_t05, acV_t05, label=r'$\tau_m = 5$ms'); plt.plot(acT_t10, acV_t10, label=r'$\tau_m = 10$ms'); plt.xlim(0, 50); plt.ylim(-0.1, 1.05); plt.legend(); plt.xlabel(r'Delay $\tau$ [ms]') plt.ylabel(r'$\langle V(t)V(t+\tau)\rangle$'); """ Explanation: We see that the autocorrelation is clearly dominated by the membrane time constant of $\tau_m=10$ ms. The switching time $\delta$ has a lesser effect, although it is noticeable for $\delta=5$ ms. Different membrane time constants To document the influence of the membrane time constant, we compute the autocorrelation function for three different $\tau_m$. End of explanation """
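# Added check (a sketch, not part of the original notebook): for this noise-driven membrane
# the autocorrelation should decay roughly as exp(-tau/tau_m), so we overlay that curve on
# the tau_m = 10 ms, delta = 0.1 ms result computed above.
import numpy as np
import matplotlib.pyplot as plt

tau_m = 10.
plt.plot(acT_t10, acV_t10, label=r'simulation, $\tau_m = 10$ms')
plt.plot(acT_t10, np.exp(-np.asarray(acT_t10) / tau_m), 'k--', label=r'$e^{-\tau/\tau_m}$')
plt.xlim(0, 50)
plt.legend()
plt.xlabel(r'Delay $\tau$ [ms]')
plt.ylabel(r'$\langle V(t)V(t+\tau)\rangle$');
"""
Explanation: As a rough consistency check (an added sketch, not part of the original analysis), the measured autocorrelation for the smallest switching time can be compared with the exponential decay $e^{-\tau/\tau_m}$ expected for a low-pass-filtered, rapidly switching noise input; the two should agree closely for $\delta \ll \tau_m$.
End of explanation
"""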
Kaggle/learntools
notebooks/ml_explainability/raw/tut5_shap_advanced.ipynb
apache-2.0
import numpy as np import pandas as pd from sklearn.model_selection import train_test_split from sklearn.ensemble import RandomForestClassifier data = pd.read_csv('../input/fifa-2018-match-statistics/FIFA 2018 Statistics.csv') y = (data['Man of the Match'] == "Yes") # Convert from string "Yes"/"No" to binary feature_names = [i for i in data.columns if data[i].dtype in [np.int64, np.int64]] X = data[feature_names] train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1) my_model = RandomForestClassifier(random_state=0).fit(train_X, train_y) """ Explanation: Recap We started by learning about permutation importance and partial dependence plots for an overview of what the model has learned. We then learned about SHAP values to break down the components of individual predictions. Now we'll expand on SHAP values, seeing how aggregating many SHAP values can give more detailed alternatives to permutation importance and partial dependence plots. SHAP Values Review Shap values show how much a given feature changed our prediction (compared to if we made that prediction at some baseline value of that feature). For example, consider an ultra-simple model: $$y = 4 * x1 + 2 * x2$$ If $x1$ takes the value 2, instead of a baseline value of 0, then our SHAP value for $x1$ would be 8 (from 4 times 2). These are harder to calculate with the sophisticated models we use in practice. But through some algorithmic cleverness, Shap values allow us to decompose any prediction into the sum of effects of each feature value, yielding a graph like this: Link to larger view* In addition to this nice breakdown for each prediction, the Shap library offers great visualizations of groups of Shap values. We will focus on two of these visualizations. These visualizations have conceptual similarities to permutation importance and partial dependence plots. So multiple threads from the previous exercises will come together here. Summary Plots Permutation importance is great because it created simple numeric measures to see which features mattered to a model. This helped us make comparisons between features easily, and you can present the resulting graphs to non-technical audiences. But it doesn't tell you how each features matter. If a feature has medium permutation importance, that could mean it has - a large effect for a few predictions, but no effect in general, or - a medium effect for all predictions. SHAP summary plots give us a birds-eye view of feature importance and what is driving it. We'll walk through an example plot for the soccer data: This plot is made of many dots. Each dot has three characteristics: - Vertical location shows what feature it is depicting - Color shows whether that feature was high or low for that row of the dataset - Horizontal location shows whether the effect of that value caused a higher or lower prediction. For example, the point in the upper left was for a team that scored few goals, reducing the prediction by 0.25. Some things you should be able to easily pick out: - The model ignored the Red and Yellow &amp; Red features. - Usually Yellow Card doesn't affect the prediction, but there is an extreme case where a high value caused a much lower prediction. - High values of Goal scored caused higher predictions, and low values caused low predictions If you look for long enough, there's a lot of information in this graph. You'll face some questions to test how you read them in the exercise. 
Summary Plots in Code You have already seen the code to load the soccer/football data: End of explanation """ import shap # package used to calculate Shap values # Create object that can calculate shap values explainer = shap.TreeExplainer(my_model) # calculate shap values. This is what we will plot. # Calculate shap_values for all of val_X rather than a single row, to have more data for plot. shap_values = explainer.shap_values(val_X) # Make plot. Index of [1] is explained in text below. shap.summary_plot(shap_values[1], val_X) """ Explanation: We get the SHAP values for all validation data with the following code. It is short enough that we explain it in the comments. End of explanation """ import shap # package used to calculate Shap values # Create object that can calculate shap values explainer = shap.TreeExplainer(my_model) # calculate shap values. This is what we will plot. shap_values = explainer.shap_values(X) # make plot. shap.dependence_plot('Ball Possession %', shap_values[1], X, interaction_index="Goal Scored") """ Explanation: The code isn't too complex. But there are a few caveats. When plotting, we call shap_values[1]. For classification problems, there is a separate array of SHAP values for each possible outcome. In this case, we index in to get the SHAP values for the prediction of "True". Calculating SHAP values can be slow. It isn't a problem here, because this dataset is small. But you'll want to be careful when running these to plot with reasonably sized datasets. The exception is when using an xgboost model, which SHAP has some optimizations for and which is thus much faster. This provides a great overview of the model, but we might want to delve into a single feature. That's where SHAP dependence contribution plots come into play. SHAP Dependence Contribution Plots We've previously used Partial Dependence Plots to show how a single feature impacts predictions. These are insightful and relevant for many real-world use cases. Plus, with a little effort, they can be explained to a non-technical audience. But there's a lot they don't show. For instance, what is the distribution of effects? Is the effect of having a certain value pretty constant, or does it vary a lot depending on the values of other feaures. SHAP dependence contribution plots provide a similar insight to PDP's, but they add a lot more detail. Start by focusing on the shape, and we'll come back to color in a minute. Each dot represents a row of the data. The horizontal location is the actual value from the dataset, and the vertical location shows what having that value did to the prediction. The fact this slopes upward says that the more you possess the ball, the higher the model's prediction is for winning the Man of the Match award. The spread suggests that other features must interact with Ball Possession %. For example, here we have highlighted two points with similar ball possession values. That value caused one prediction to increase, and it caused the other prediction to decrease. For comparison, a simple linear regression would produce plots that are perfect lines, without this spread. This suggests we delve into the interactions, and the plots include color coding to help do that. While the primary trend is upward, you can visually inspect whether that varies by dot color. Consider the following very narrow example for concreteness. These two points stand out spatially as being far away from the upward trend. They are both colored purple, indicating the team scored one goal. 
You can interpret this to say In general, having the ball increases a team's chance of having their player win the award. But if they only score one goal, that trend reverses and the award judges may penalize them for having the ball so much if they score that little. Outside of those few outliers, the interaction indicated by color isn't very dramatic here. But sometimes it will jump out at you. Dependence Contribution Plots in Code We get the dependence contribution plot with the following code. The only line that's different from the summary_plot is the last line. End of explanation """
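# Added sketch (not from the original tutorial): the mean absolute SHAP value per feature
# gives a single global-importance number, similar in spirit to permutation importance.
import numpy as np
import pandas as pd

mean_abs_shap = pd.Series(np.abs(shap_values[1]).mean(axis=0), index=X.columns)
print(mean_abs_shap.sort_values(ascending=False).head(10))

# The shap library can draw the same ranking directly as a bar chart.
shap.summary_plot(shap_values[1], X, plot_type="bar")
"""
Explanation: If you want a single global-importance number per feature, one common approach (shown above as a hedged sketch that reuses the shap_values already computed, rather than anything from the original tutorial) is to average the absolute SHAP values across all rows; shap's summary_plot with plot_type="bar" produces the corresponding chart.
End of explanation
"""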
vravishankar/Jupyter-Books
Errors+and+Exceptions.ipynb
mit
print('Hello) """ Explanation: Errors and Exceptions While executing a python program we may encounter errors. There are 2 types of errors: Syntax Errors - When you don't follow the proper structure of the python program (Like missing a quote during initialising a string). Exceptions - Sometimes even when the syntax is correct, errors may occur when the program is run or executed. These run time errors are called Exceptions (like trying to divide by zero or file does not exist). If Exceptions are not handled properly, the program will crash and come to a sudden & unexpected halt. Syntax Errors End of explanation """ 1 / 0 open('doesnotexistfile.txt') """ Explanation: Exceptions End of explanation """ print(locals()['__builtins__']) """ Explanation: Built-in Exceptions Python creates an Exception object whenever a runtime error occurs. There are a number of built-in exceptions. End of explanation """ import sys def divide(a,b): try: return a / b except: print(sys.exc_info()[0]) divide (1,2) divide (2,0) # This will be captured by the 'except' clause # print custom error message def divide(a,b): try: return a / b except: print('Error occured',sys.exc_info()[0]) divide (1,2) divide (2,0) # This will be captured by the 'except' clause """ Explanation: Following are some of the built-in exceptions. ZeroDivisionError - Raised when you try to divide a number by zero FileNotFoundError - Raised when a file required does not exist SyntaxError - Raised when proper syntax is not applied NameError - Raised when a variable is not found in local or global scope KeyError - Raised when a key is not found in a dictionary Handling Exceptions Python provides 'try/except' statements to handle the exceptions. The operation which can raise exception is placed inside the 'try' statement and code that handles exception is written in the 'except' clause. End of explanation """ def divide(a,b): try: return a / b except (ZeroDivisionError): print('Number cannot be divided by zero or non-integer') except: print('Error Occured',sys.exc_info()[0]) divide (1,2) divide (2,0) # This will be captured by the 'except - zero division error' clause divide (2,'a') # This will be captured by the generic 'except' clause def divide(a,b): try: return a / b except (ZeroDivisionError, TypeError): # use a tuple to capture multiple errors print('Number cannot be divided by zero or non-integer') except: print('Error Occured',sys.exc_info()[0]) divide (1,2) divide (2,0) # This will be captured by the 'except - zero division error' clause divide (2,'a') # This will be captured by the generic 'except' clause """ Explanation: Catching Specific Exceptions A try clause can have any number of except clause to capture specific exceptions and only one will be executed in case an exception occurs. We can use tuple of values to specify multiple exceptions in an exception clause End of explanation """ import sys try: f = open('myfile.txt') s = f.readline() i = int(s.strip()) except OSError as err: print("OS error: {0}".format(err)) except ValueError: print("Could not convert data to an integer.") except: print("Unexpected error:", sys.exc_info()[0]) raise """ Explanation: The last except clause may omit the exception name(s), to serve as a wildcard. Use this with extreme caution, since it is easy to mask a real programming error in this way! 
It can also be used to print an error message and then re-raise the exception (allowing a caller to handle the exception as well): End of explanation """ for arg in sys.argv[1:]: try: f = open(arg, 'r') except OSError: print('cannot open', arg) else: print(arg, 'has', len(f.readlines()), 'lines') f.close() """ Explanation: The try … except statement has an optional else clause, which, when present, must follow all except clauses. It is useful for code that must be executed if the try clause does not raise an exception. For example: End of explanation """ try: raise Exception('1002','Custom Exception Occured') except Exception as inst: print(type(inst)) print(inst) print(inst.args) errno, errdesc = inst.args print('Error Number:',errno) print('Error Description:',errdesc) """ Explanation: The use of the else clause is better than adding additional code to the try clause because it avoids accidentally catching an exception that wasn’t raised by the code being protected by the try … except statement. Exception Instances The except clause may specify a variable after the exception name. The variable is bound to an exception instance with the arguments stored in instance.args. For convenience, the exception instance defines str() so the arguments can be printed directly without having to reference .args. End of explanation """ def func_will_fail(): return 1 / 0 try: func_will_fail() except ZeroDivisionError as err: print('Handling Error - ',err) """ Explanation: Exception handlers don’t just handle exceptions if they occur immediately in the try clause, but also if they occur inside functions that are called (even indirectly) in the try clause. For example: End of explanation """ raise NameError('Error Occured') """ Explanation: Raise Exceptions The raise statement allows the programmer to force a specified exception to occur. For example: End of explanation """ try: raise NameError('Error Captured') except NameError: print('Captured Exception') raise """ Explanation: If you need to determine whether an exception was raised but don’t intend to handle it, a simpler form of the raise statement allows you to re-raise the exception: End of explanation """ class CustomError(Exception): pass raise CustomError() raise CustomError('Unexpected Error Occured') """ Explanation: User Exceptions Python has many built-in exceptions which forces your program to output an error when something in it goes wrong. However, sometimes you may need to create custom exceptions that serves your purpose. In Python, users can define such exceptions by creating a new class. This exception class has to be derived, either directly or indirectly, from Exception class. Most of the built-in exceptions are also derived form this class. End of explanation """ # define Python user-defined exceptions class Error(Exception): """Base class for other exceptions""" pass class ValueTooSmallError(Error): """Raised when the input value is too small""" pass class ValueTooLargeError(Error): """Raised when the input value is too large""" pass # our main program # user guesses a number until he/she gets it right # you need to guess this number number = 10 while True: try: i_num = int(input("Enter a number: ")) if i_num < number: raise ValueTooSmallError elif i_num > number: raise ValueTooLargeError break except ValueTooSmallError: print("This value is too small, try again!") print() except ValueTooLargeError: print("This value is too large, try again!") print() print("Congratulations! 
You guessed it correctly.") """ Explanation: Here, we have created a user-defined exception called CustomError which is derived from the Exception class. This new exception can be raised, like other exceptions, using the raise statement with an optional error message. Point to Note When we are developing a large Python program, it is a good practice to place all the user-defined exceptions that our program raises in a separate file. Many standard modules do this. They define their exceptions separately as exceptions.py or errors.py (generally but not always). Most exceptions are defined with names that end in “Error,” similar to the naming of the standard exceptions. End of explanation """ class Error(Exception): """Base class for exceptions in this module.""" pass class InputError(Error): """Exception raised for errors in the input. Attributes: expression -- input expression in which the error occurred message -- explanation of the error """ def __init__(self, expression, message): self.expression = expression self.message = message class TransitionError(Error): """Raised when an operation attempts a state transition that's not allowed. Attributes: previous -- state at beginning of transition next -- attempted new state message -- explanation of why the specific transition is not allowed """ def __init__(self, previous, next, message): self.previous = previous self.next = next self.message = message """ Explanation: Here, we have defined a base class called Error. The other two exceptions (ValueTooSmallError and ValueTooLargeError) that are actually raised by our program are derived from this class. This is the standard way to define user-defined exceptions in Python programming. Many standard modules define their own exceptions to report errors that may occur in functions they define. A detailed example is given below: End of explanation """ try: raise KeyBoardInterrupt finally: print('Bye') def divide(a,b): try: result = a / b except ZeroDivisionError: print('Number cannot be divided by zero') else: print('Result',result) finally: print('Executed Finally Clause') divide(2,1) divide(2,0) divide('1','2') """ Explanation: Clean up Actions The try statement in Python can have an optional finally clause. This clause is executed no matter what, and is generally used to release external resources. A finally clause is always executed before leaving the try statement, whether an exception has occurred or not. When an exception has occurred in the try clause and has not been handled by an except clause (or it has occurred in an except or else clause), it is re-raised after the finally clause has been executed. The finally clause is also executed “on the way out” when any other clause of the try statement is left via a break, continue or return statement. End of explanation """ for line in open("myfile.txt"): print(line, end="") """ Explanation: Please note that the TypeError raised by dividing two strings is not handled by the except clause and therefore re-raised after the finally clause has been executed. Pre Clean up Actions Some objects define standard clean-up actions to be undertaken when the object is no longer needed. Look at the following example, which tries to open a file and print its contents to the screen. End of explanation """ with open("test.txt") as f: for line in f: print(line, end="") """ Explanation: The problem with this code is that it leaves the file open for an indeterminate amount of time after this part of the code has finished executing. This is not a best practice. 
End of explanation """
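# Added sketch (not part of the original notes): the same guaranteed clean-up behaviour can be
# given to your own objects by implementing the context manager protocol, so the resource is
# released even when the body of the with block raises an exception.
class ManagedResource:
    def __enter__(self):
        print('acquire resource')
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        print('release resource')
        return False  # do not suppress the exception

try:
    with ManagedResource():
        raise ValueError('something went wrong')
except ValueError as err:
    print('handled:', err)
"""
Explanation: Just as with open(...) closes the file for you, any class that defines __enter__ and __exit__ can be used in a with statement; __exit__ always runs, so clean-up happens whether or not an exception was raised inside the block. The class above is a minimal illustrative sketch.
End of explanation
"""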
markdewing/qmc_algorithms
Wavefunctions/Explain_Bspline.ipynb
mit
xs = Symbol('x') knots = [0,1,2,3,4,5,6] # Third-order bspline sym_basis = bspline_basis_set(3, knots, xs) # Form for one basis function sym_basis[0] # Plot some basis functions nbasis_to_plot = 3 npoints_to_plot = 40 basis_y = np.zeros((nbasis_to_plot, npoints_to_plot)) xvals = np.zeros(npoints_to_plot) for i in range(npoints_to_plot): xv = i*.1 + 1.0 for j in range(3): basis_y[j,i] = sym_basis[j].subs(xs,xv) xvals[i] = xv plt.plot(xvals, basis_y[0,:], xvals, basis_y[1,:], xvals, basis_y[2,:]) """ Explanation: Bspline basis End of explanation """ # Need knot values outside the target interval to generate the right set of basis functions. knots = [0,1,2,3,4,5] all_knots = [-3,-2,-1,0,1,2,3,4,5,6,7,8] # Third-order bspline sym_basis = bspline_basis_set(3, all_knots, xs) print("Number of basis functions = ",len(sym_basis)) sym_basis """ Explanation: Function approximation To fully represent an interval (say, 0.0 - 5.0, with a knot spacing of 1.0), we need parts of basis functions outside the interval as well. To approximate a function, we evaluate at the M knots (there are 6 knots for this example). There will be M+2 basis functions. End of explanation """ # Fill out the coefficient matrix mat = Matrix.eye(len(knots)+2) for i,k in enumerate(knots): for j,basis in enumerate(sym_basis): bv = basis.subs(xs,k) mat[i+1,j] = bv # Natural boundary conditions - set 2nd derivative at end of range to zero dd_spline = [diff(bs, xs, 2) for bs in sym_basis] row0 = [dds.subs(xs, 0) for dds in dd_spline] rowN = [dds.subs(xs, knots[-1]) for dds in dd_spline] display(row0) display(rowN) for i in range(len(row0)): mat[0,i] = row0[i] mat[-1,i] = rowN[i] mat # Let us assume a simple quadratic function for interpolation func_to_approx = [k*k for k in knots] func_to_approx """ Explanation: To approximate a function we need coefficients for each basis function $$ f(x) = \sum_0^{M+2} c_i B_i(x) $$ The values of the function at the knots provides $M$ constraints. We still need 2 more to fully specify the coefficients. This is where the boundary conditions come into play. 
$$ \sum_0^{M+2} c_i B_i(x_j) = g(x_j) $$ End of explanation """ lhs_vals = [0] + func_to_approx + [0] # Zeros are the value of the second derivative coeffs = mat.LUsolve(Matrix(len(lhs_vals), 1, lhs_vals)) coeffs.T.tolist()[0] """ Explanation: Solve for coefficients End of explanation """ def spline_eval(basis, coeffs, x): val = 0.0 for c,bs in zip(coeffs, basis): val += c*bs.subs(xs,x) return val # check that it reproduces the knots for k,v in zip(knots,func_to_approx): print(k,spline_eval(sym_basis, coeffs, k),v) # Now check elsewhere xvals = [] yvals = [] for i in range(50): x = .1*i val = spline_eval(sym_basis, coeffs, x) xvals.append(x) yvals.append(val) plt.plot(xvals, yvals) #plt.plot(xvals, yvals, knots, func_to_approx) """ Explanation: Evaluate the spline End of explanation """ def to_interval(ival): """Convert relational expression to an Interval""" min_val = None lower_open = False max_val = None upper_open = True if isinstance(ival, And): for rel in ival.args: #print('rel ',rel, type(rel), rel.args[1]) if isinstance(rel, StrictGreaterThan): min_val = rel.args[1] #lower_open = True elif isinstance(rel, GreaterThan): min_val = rel.args[1] #lower_open = False elif isinstance(rel, StrictLessThan): max_val = rel.args[1] #upper_open = True elif isinstance(rel, LessThan): max_val = rel.args[1] #upper_open = False else: print('unhandled ',rel) if min_val == None or max_val == None: print('error',ival) return Interval(min_val, max_val, lower_open, upper_open) # Transpose the interval and coefficients # Note that interval [0,1) has the polynomial coefficients found in the einspline code # The other intervals could be shifted, and they would also have the same polynomials def transpose_interval_and_coefficients(sym_basis): cond_map = defaultdict(list) i1 = Interval(0,5, False, False) # interval for evaluation for idx, s0 in enumerate(sym_basis): for expr, cond in s0.args: if cond != True: i2 = to_interval(cond) if not i1.is_disjoint(i2): cond_map[i2].append( (idx, expr) ) return cond_map cond_map = transpose_interval_and_coefficients(sym_basis) for cond, expr in cond_map.items(): #print(cond, [e.subs(x, x-cond.args[0]) for e in expr]) print("Interval = ",cond) # Shift interval to a common start - see that the polynomial coefficients are all the same #e2 = [expand(e[1].subs(xs, xs+cond.args[0])) for e in expr] e2 = [(idx,expand(e)) for idx, e in expr] display(e2) # Create piecewise expression from the transposed intervals def recreate_piecewise(basis_map, c): args = [] for cond, exprs in basis_map.items(): e = 0 for idx, b in exprs: e += c[idx] * b args.append( (e, cond.as_relational(xs))) return Piecewise(*args) c = IndexedBase('c') spline = recreate_piecewise(cond_map, c) spline def spline_eval2(spline, coeffs, x): """Evaluate spline using transposed expression""" val = 0.0 c = IndexedBase('c') to_sub = {} for i,cf in enumerate(coeffs): to_sub[c[i]] = cf to_sub[xs] = x return spline.subs(to_sub) for k in knots: val = spline_eval2(spline, coeffs, k) print(k,val) """ Explanation: Matching Einspline The einspline code collects the values of each power of x in each interval. The current form of the basis functions is a piecewise interval on the inside, and multiplication by the coefficients on the outside. We would like to transpose this representation so that the intervals are on the outside, and the constributions from each coefficient are on the inside. This will require some examination of the Sympy representation for intervals. 
End of explanation """ Delta = Symbol('Delta',positive=True) #knots = [0,1*Delta,2*Delta,3*Delta,4*Delta,5*Delta] nknots = 6 knots = [i*Delta for i in range(nknots)] display('knots = ',knots) #all_knots = [-3*Delta,-2*Delta,-1*Delta,0,1*Delta,2*Delta,3*Delta,4*Delta,5*Delta,6*Delta,7*Delta,8*Delta] all_knots = [i*Delta for i in range(-3,nknots+3)] #display('all knots',all_knots) rcut = (nknots-1)*Delta # Third-order bspline jastrow_sym_basis = bspline_basis_set(3, all_knots, xs) print("Number of basis functions = ",len(jastrow_sym_basis)) #jastrow_sym_basis # Now create the spline from the basis functions jastrow_cond_map = transpose_interval_and_coefficients(jastrow_sym_basis) c = IndexedBase('c',shape=(nknots+3)) #c = MatrixSymbol('c',nknots+3,1) jastrow_spline = recreate_piecewise(jastrow_cond_map, c) jastrow_spline # Boundary conditions at r = 0 with the cusp (first derivative) condition cusp_val = Symbol('A') # Evaluate spline derivative at 0 du = diff(jastrow_spline,xs) du_zero = du.subs(xs, 0) display(du_zero) # Solve following equation display(Eq(du_zero-cusp_val,0)) # solve doesn't seem to work with Indexed value, substitute something else c0 = Symbol('c0') soln = solve(du_zero.subs(c[0],c0) - cusp_val, c0) display(Eq(c0, soln[0])) # Boundary conditions at r=r_cut. For smoothness the value and derivatives should be 0. # Add zero value and derivatives at r_cut eq_v = jastrow_spline.subs(xs, rcut) display(Eq(Symbol('u')(xs),eq_v)) eq_dv = diff(jastrow_spline, xs).subs(xs,rcut) display(Eq(diff(Symbol('u')(xs),xs),eq_dv)) eq_ddv = diff(jastrow_spline, xs, 2).subs(xs,rcut) display(Eq(diff(Symbol('u')(xs),xs,2),eq_ddv)) #eq_d3v = diff(jastrow_spline, xs, 3).subs(xs, rcut) #display(eq_d3v) # Solve for all these equal to zero c5 = Symbol('c5') c6 = Symbol('c6') c7 = Symbol('c7') subs_list = {c[5]:c5, c[6]:c6, c[7]:c7} linsolve([v.subs(subs_list) for v in [eq_v, eq_dv, eq_ddv]], [c5, c6, c7]) # QMCPACK enforces these conditions by setting c5, c6, c7 (the last three coefficients) to 0. """ Explanation: Bspline for Jastrow For the radial part of the Jastrow factor, the derivative at $r=0$ is fixed by the cusp condition. At $r=r_{cut}$, the value and derivatives are zero. Also add the grid spacing, $\Delta$, to the knots. End of explanation """
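# Added sanity check (a sketch, not part of the original notebook): substituting zero for the
# last three coefficients into the boundary expressions above should make the value and the
# first two derivatives vanish at r_cut, consistent with the linsolve result.
zero_tail = {c[5]: 0, c[6]: 0, c[7]: 0}
print([expr.subs(zero_tail) for expr in (eq_v, eq_dv, eq_ddv)])  # expect [0, 0, 0]
"""
Explanation: A quick check of the boundary-condition result: setting c[5] = c[6] = c[7] = 0 in eq_v, eq_dv and eq_ddv should return zeros, confirming that zeroing the last three coefficients enforces a smooth cutoff at r_cut. This cell is an added sketch and assumes the expressions defined above are still in scope.
End of explanation
"""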
ProfessorKazarinoff/staticsite
content/code/flask/sqlite_play.ipynb
gpl-3.0
import sqlite3 db = sqlite3.connect("name_database.db") """ Explanation: To create a new database, we first import sqlite3 and then instantiate a new database object with the sqlite3.connect() method. End of explanation """ # create a database called name_database.db # add one table to the database called names_table # add columns to the database table: Id, first_name, last_name, age conn = sqlite3.connect('name_database.db') cur = conn.cursor() cur.execute("""CREATE TABLE IF NOT EXISTS names_table ( Id INTEGER PRIMARY KEY AUTOINCREMENT, first_name text, last_name text, age integer )""") conn.commit() cur.close() conn.close() db.close() """ Explanation: Next, we connect to the database with the sqlite3.connect() method and create a connection object called conn. Then, from the connection object conn, we create a cursor object called cur. The cursor object executes the database commands. The commands the cursor object cur executes are written in a database query language. Learning database query language is sort of like learning a whole new programming language. I am still note really familiar with the database language query commands or syntax. Before we can add records to the database, we need to create a table in the database. End of explanation """ conn = sqlite3.connect('name_database.db') cur = conn.cursor() cur.execute("INSERT INTO names_table VALUES(:Id, :first_name, :last_name, :age)", {'Id': None, 'first_name': 'Gabriella', 'last_name': 'Louise', 'age': int(8) }) conn.commit() cur.close() conn.close() """ Explanation: Now to add a new record to the database, we need to: connect to the database, creating a connection object conn create a cursor object cur based on the connection object execute commands on the cursor object cur to add a new record to the database commit the changes to the connection object conn close the cursor object close the connection object End of explanation """ conn = sqlite3.connect('name_database.db') cur = conn.cursor() cur.execute("SELECT first_name, last_name, age, MAX(rowid) FROM names_table") record = cur.fetchone() print(record) cur.close() conn.close() """ Explanation: Now let's see if we can retrieve the record we just added to the database. End of explanation """ conn = sqlite3.connect('name_database.db') cur = conn.cursor() cur.execute("INSERT INTO names_table VALUES(:Id, :first_name, :last_name, :age)", {'Id': None, 'first_name': 'Maelle', 'last_name': 'Levin', 'age': int(5) }) conn.commit() cur.close() conn.close() """ Explanation: Let's add another record to the database End of explanation """ conn = sqlite3.connect('name_database.db') cur = conn.cursor() cur.execute("SELECT first_name, last_name, age, MAX(rowid) FROM names_table") record = cur.fetchone() print(record) cur.close() conn.close() """ Explanation: And again let's see the most recent record: End of explanation """
Jay-Jay-D/LeanSTP
Jupyter/KitchenSinkQuantBookTemplate.ipynb
apache-2.0
%matplotlib inline # Imports from clr import AddReference AddReference("System") AddReference("QuantConnect.Common") AddReference("QuantConnect.Jupyter") AddReference("QuantConnect.Indicators") from System import * from QuantConnect import * from QuantConnect.Data.Custom import * from QuantConnect.Data.Market import TradeBar, QuoteBar from QuantConnect.Jupyter import * from QuantConnect.Indicators import * from datetime import datetime, timedelta import matplotlib.pyplot as plt import pandas as pd # Create an instance qb = QuantBook() """ Explanation: Welcome to The QuantConnect Research Page Refer to this page for documentation https://www.quantconnect.com/docs#Introduction-to-Jupyter Contribute to this template file https://github.com/QuantConnect/Lean/blob/master/Jupyter/BasicQuantBookTemplate.ipynb QuantBook Basics Start QuantBook Add the references and imports Create a QuantBook instance End of explanation """ spy = qb.AddEquity("SPY") eur = qb.AddForex("EURUSD") btc = qb.AddCrypto("BTCUSD") fxv = qb.AddData[FxcmVolume]("EURUSD_Vol", Resolution.Hour) """ Explanation: Selecting Asset Data Checkout the QuantConnect docs to learn how to select asset data. End of explanation """ # Gets historical data from the subscribed assets, the last 360 datapoints with daily resolution h1 = qb.History(qb.Securities.Keys, 360, Resolution.Daily) # Plot closing prices from "SPY" h1.loc["SPY"]["close"].plot() # Gets historical data from the subscribed assets, from the last 30 days with daily resolution h2 = qb.History(qb.Securities.Keys, timedelta(360), Resolution.Daily) # Plot high prices from "EURUSD" h2.loc["EURUSD"]["high"].plot() # Gets historical data from the subscribed assets, between two dates with daily resolution h3 = qb.History([btc.Symbol], datetime(2014,1,1), datetime.now(), Resolution.Daily) # Plot closing prices from "BTCUSD" h3.loc["BTCUSD"]["close"].plot() # Only fetchs historical data from a desired symbol h4 = qb.History([spy.Symbol], 360, Resolution.Daily) # or qb.History(["SPY"], 360, Resolution.Daily) # Only fetchs historical data from a desired symbol h5 = qb.History([eur.Symbol], timedelta(360), Resolution.Daily) # or qb.History(["EURUSD"], timedelta(30), Resolution.Daily) # Fetchs custom data h6 = qb.History([fxv.Symbol], timedelta(360)) h6.loc[fxv.Symbol.Value]["volume"].plot() """ Explanation: Historical Data Requests We can use the QuantConnect API to make Historical Data Requests. The data will be presented as multi-index pandas.DataFrame where the first index is the Symbol. For more information, please follow the link. 
End of explanation """ goog = qb.AddOption("GOOG") goog.SetFilter(-2, 2, timedelta(0), timedelta(180)) option_history = qb.GetOptionHistory(goog.Symbol, datetime(2017, 1, 4)) print (option_history.GetStrikes()) print (option_history.GetExpiryDates()) h7 = option_history.GetAllData() """ Explanation: Historical Options Data Requests Select the option data Sets the filter, otherwise the default will be used SetFilter(-1, 1, timedelta(0), timedelta(35)) Get the OptionHistory, an object that has information about the historical options data End of explanation """ es = qb.AddFuture("ES") es.SetFilter(timedelta(0), timedelta(180)) future_history = qb.GetFutureHistory(es.Symbol, datetime(2017, 1, 4)) print (future_history.GetExpiryDates()) h7 = future_history.GetAllData() """ Explanation: Historical Future Data Requests Select the future data Sets the filter, otherwise the default will be used SetFilter(timedelta(0), timedelta(35)) Get the FutureHistory, an object that has information about the historical future data End of explanation """ data = qb.GetFundamental(["AAPL","AIG","BAC","GOOG","IBM"], "ValuationRatios.PERatio") data """ Explanation: Get Fundamental Data GetFundamental([symbol], selector, start_date = datetime(1998,1,1), end_date = datetime.now()) We will get a pandas.DataFrame with fundamental data. End of explanation """ # Example with BB, it is a datapoint indicator # Define the indicator bb = BollingerBands(30, 2) # Gets historical data of indicator bbdf = qb.Indicator(bb, "SPY", 360, Resolution.Daily) # drop undesired fields bbdf = bbdf.drop('standarddeviation', 1) # Plot bbdf.plot() # For EURUSD bbdf = qb.Indicator(bb, "EURUSD", 360, Resolution.Daily) bbdf = bbdf.drop('standarddeviation', 1) bbdf.plot() # Example with ADX, it is a bar indicator adx = AverageDirectionalIndex("adx", 14) adxdf = qb.Indicator(adx, "SPY", 360, Resolution.Daily) adxdf.plot() # For EURUSD adxdf = qb.Indicator(adx, "EURUSD", 360, Resolution.Daily) adxdf.plot() # Example with ADO, it is a tradebar indicator (requires volume in its calculation) ado = AccumulationDistributionOscillator("ado", 5, 30) adodf = qb.Indicator(ado, "SPY", 360, Resolution.Daily) adodf.plot() # For EURUSD. # Uncomment to check that this SHOULD fail, since Forex is data type is not TradeBar. 
# adodf = qb.Indicator(ado, "EURUSD", 360, Resolution.Daily) # adodf.plot() # SMA cross: symbol = "EURUSD" # Get History hist = qb.History([symbol], 500, Resolution.Daily) # Get the fast moving average fast = qb.Indicator(SimpleMovingAverage(50), symbol, 500, Resolution.Daily) # Get the fast moving average slow = qb.Indicator(SimpleMovingAverage(200), symbol, 500, Resolution.Daily) # Remove undesired columns and rename others fast = fast.drop('rollingsum', 1).rename(columns={'simplemovingaverage': 'fast'}) slow = slow.drop('rollingsum', 1).rename(columns={'simplemovingaverage': 'slow'}) # Concatenate the information and plot df = pd.concat([hist.loc[symbol]["close"], fast, slow], axis=1).dropna(axis=0) df.plot() # Get indicator defining a lookback period in terms of timedelta ema1 = qb.Indicator(ExponentialMovingAverage(50), "SPY", timedelta(100), Resolution.Daily) # Get indicator defining a start and end date ema2 = qb.Indicator(ExponentialMovingAverage(50), "SPY", datetime(2016,1,1), datetime(2016,10,1), Resolution.Daily) ema = pd.concat([ema1, ema2], axis=1) ema.plot() rsi = RelativeStrengthIndex(14) # Selects which field we want to use in our indicator (default is Field.Close) rsihi = qb.Indicator(rsi, "SPY", 360, Resolution.Daily, Field.High) rsilo = qb.Indicator(rsi, "SPY", 360, Resolution.Daily, Field.Low) rsihi = rsihi.rename(columns={'relativestrengthindex': 'high'}) rsilo = rsilo.rename(columns={'relativestrengthindex': 'low'}) rsi = pd.concat([rsihi['high'], rsilo['low']], axis=1) rsi.plot() """ Explanation: Indicators We can easily get the indicator of a given symbol with QuantBook. For all indicators, please checkout QuantConnect Indicators Reference Table End of explanation """
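# Added sketch (not part of the original template): once the fast and slow moving averages sit
# in one DataFrame, a simple crossover signal is just a column comparison.
df['signal'] = (df['fast'] > df['slow']).astype(int)
crossovers = df[df['signal'].diff().fillna(0) != 0]
df[['close', 'fast', 'slow']].plot()
print(crossovers.tail())
"""
Explanation: A small follow-up sketch on the SMA-cross DataFrame built above: signal is 1 while the fast average is above the slow one, and the rows where signal changes mark the crossover bars. This is illustrative only and not part of the original notebook.
End of explanation
"""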
anilcs13m/Projects
MovieReviewSentimentAnalysis/MovieReveiw/.ipynb_checkpoints/NLP_Movies-checkpoint.ipynb
gpl-2.0
import re from bs4 import BeautifulSoup """ Explanation: Natural Language Processing in a Kaggle Competition: Movie Reviews <img src='Movie_thtr.jpg', width = 800, height = 600> Source I decided to try playing around with a Kaggle competition. In this case, I entered the "When bag of words meets bags of popcorn" contest. This contest isn't for money; it is just a way to learn about various machine learning approaches. The competition was trying to showcase Google's Word2Vec. This essentially uses deep learning to find features in text that can be used to help in classification tasks. Specifically, in the case of this contest, the goal involves labeling the sentiment of a movie review from IMDB. Ratings were on a 10 point scale, and any review of 7 or greater was considered a positive movie review. Originally, I was going to try out Word2Vec and train it on unlabeled reviews, but then one of the competitors pointed out that you could simply use a less complicated classifier to do this and still get a good result. I decided to take this basic inspiration and try a few various classifiers to see what I could come up with. The highest my score received was 6th place back in December of 2014, but then people started using ensemble methods to combine various models together and get a perfect score after a lot of fine tuning with the parameters of the ensemble weights. Hopefully, this notebook will help you understand some basic NLP (Natural Language Processing) techniques, along with some tips on using scikit-learn to make your classification models. Cleaning the Reviews The first thing we need to do is create a simple function that will clean the reviews into a format we can use. We just want the raw text, not all of the other associated HTML, symbols, or other junk. We will need a couple of very nice libraries for this task: BeautifulSoup for taking care of anything HTML related and re for regular expressions. End of explanation """ def review_to_wordlist(review): ''' Meant for converting each of the IMDB reviews into a list of words. ''' # First remove the HTML. review_text = BeautifulSoup(review).get_text() # Use regular expressions to only include words. review_text = re.sub("[^a-zA-Z]"," ", review_text) # Convert words to lower case and split them into separate words. words = review_text.lower().split() # Return a list of words return(words) """ Explanation: Now set up our function. This will clean all of the reviews for us. End of explanation """ import pandas as pd train = pd.read_csv('labeledTrainData.tsv', header=0, delimiter="\t", quoting=3) test = pd.read_csv('testData.tsv', header=0, delimiter="\t", quoting=3 ) # Import both the training and test data. """ Explanation: Great! Now it is time to go ahead and load our data in. For this, pandas is definitely the library of choice. If you want to follow along with a downloaded version of the notebook yourself, make sure you obtain the data from Kaggle. You will need a Kaggle account in order to access it. End of explanation """ y_train = train['sentiment'] """ Explanation: Now it is time to get the labels from the training set for our reviews. That way, we can teach our classifier which reviews are positive vs. negative. 
End of explanation """ traindata = [] for i in xrange(0,len(train['review'])): traindata.append(" ".join(review_to_wordlist(train['review'][i]))) testdata = [] for i in xrange(0,len(test['review'])): testdata.append(" ".join(review_to_wordlist(test['review'][i]))) """ Explanation: Now we need to clean both the train and test data to get it ready for the next part of our program. End of explanation """ from sklearn.feature_extraction.text import TfidfVectorizer as TFIV tfv = TFIV(min_df=3, max_features=None, strip_accents='unicode', analyzer='word',token_pattern=r'\w{1,}', ngram_range=(1, 2), use_idf=1,smooth_idf=1,sublinear_tf=1, stop_words = 'english') """ Explanation: TF-IDF Vectorization The next thing we are going to do is make TF-IDF (term frequency-interdocument frequency) vectors of our reviews. In case you are not familiar with what this is doing, essentially we are going to evaluate how often a certain term occurs in a review, but normalize this somewhat by how many reviews a certain term also occurs in. Wikipedia has an explanation that is sufficient if you want further information. This can be a great technique for helping to determine which words (or ngrams of words) will make good features to classify a review as positive or negative. To do this, we are going to use the TFIDF vectorizer from scikit-learn. Then, decide what settings to use. The documentation for the TFIDF class is available here. In the case of the example code on Kaggle, they decided to remove all stop words, along with ngrams up to a size of two (you could use more but this will require a LOT of memory, so be careful which settings you use!) End of explanation """ X_all = traindata + testdata # Combine both to fit the TFIDF vectorization. lentrain = len(traindata) tfv.fit(X_all) # This is the slow part! X_all = tfv.transform(X_all) X = X_all[:lentrain] # Separate back into training and test sets. X_test = X_all[lentrain:] """ Explanation: Now that we have the vectorization object, we need to run this on all of the data (both training and testing) to make sure it is applied to both datasets. This could take some time on your computer! End of explanation """ X.shape """ Explanation: Making Our Classifiers Because we are working with text data, and we just made feature vectors of every word (that isn't a stop word of course) in all of the reviews, we are going to have sparse matrices to deal with that are quite large in size. Just to show you what I mean, let's examine the shape of our training set. End of explanation """ from sklearn.linear_model import LogisticRegression as LR from sklearn.grid_search import GridSearchCV grid_values = {'C':[30]} # Decide which settings you want for the grid search. model_LR = GridSearchCV(LR(penalty = 'L2', dual = True, random_state = 0), grid_values, scoring = 'roc_auc', cv = 20) # Try to set the scoring on what the contest is asking for. # The contest says scoring is for area under the ROC curve, so use this. model_LR.fit(X,y_train) # Fit the model. """ Explanation: That means we have 25,000 training examples (or rows) and 309,798 features (or columns). We need something that is going to be somewhat computationally efficient given how many features we have. Using something like a random forest to classify would be unwieldy (plus random forests can't work with sparse matrices anyway yet in scikit-learn). That means we need something lightweight and fast that scales to many dimensions well. 
Some possible candidates are: Naive Bayes Logistic Regression SGD Classifier (utilizes Stochastic Gradient Descent for much faster runtime) Let's just try all three as submissions to Kaggle and see how they perform. First up: Logistic Regression (see the scikit-learn documentation here). While in theory L1 regularization should work well because p>>n (many more features than training examples), I actually found through a lot of testing that L2 regularization got better results. You could set up your own trials using scikit-learn's built-in GridSearch class, which makes things a lot easier to try. I found through my testing that using a parameter C of 30 got the best results. End of explanation """ model_LR.grid_scores_ model_LR.best_estimator_ """ Explanation: You can investigate which parameters did the best and what scores they received by looking at the model_LR object. End of explanation """ from sklearn.naive_bayes import MultinomialNB as MNB model_NB = MNB() model_NB.fit(X, y_train) """ Explanation: Feel free, if you have an interactive version of the notebook, to play around with various settings inside the grid_values object to optimize your ROC_AUC score. Otherwise, let's move on to the next classifier, Naive Bayes. Unlike Logistic Regression, Naive Bayes doesn't have a regularization parameter to tune. You just have to choose which "flavor" of Naive Bayes to use. According to the documentation on Naive Bayes from scikit-learn, Multinomial is our best version to use, since we no longer have just a 1 or 0 for a word feature: it has been normalized by TF-IDF, so our values will be BETWEEN 0 and 1 (most of the time, although having a few TF-IDF scores exceed 1 is technically possible). If we were just looking at word occurrence vectors (with no counting), Bernoulli would have been a better fit since it is based on binary values. Let's make our Multinomial Naive Bayes object, and train it. End of explanation """ from sklearn.cross_validation import cross_val_score import numpy as np print "20 Fold CV Score for Multinomial Naive Bayes: ", np.mean(cross_val_score (model_NB, X, y_train, cv=20, scoring='roc_auc')) # This will give us a 20-fold cross validation score that looks at ROC_AUC so we can compare with Logistic Regression. """ Explanation: Pretty fast, right? This speed comes at a price, however. Naive Bayes assumes all of your features are ENTIRELY independent from each other. In the case of word vectors, that seems like a somewhat reasonable assumption but with the ngrams we included that probably isn't always the case. Because of this, Naive Bayes tends to be less accurate than other classification algorithms, especially if you have a smaller number of training examples. Why don't we see how Naive Bayes does (at least in a 20 fold CV comparison) so we have a rough idea of how well it performs compared to our Logistic Regression classifier? You could use GridSearch again, but that seems like overkill. There is a simpler method we can import from scikit-learn for this task. End of explanation """ from sklearn.linear_model import SGDClassifier as SGD sgd_params = {'alpha': [0.00006, 0.00007, 0.00008, 0.0001, 0.0005]} # Regularization parameter model_SGD = GridSearchCV(SGD(random_state = 0, shuffle = True, loss = 'modified_huber'), sgd_params, scoring = 'roc_auc', cv = 20) # Find out which regularization parameter works the best. model_SGD.fit(X, y_train) # Fit the model. 
""" Explanation: Well, it wasn't quite as good as our well-tuned Logistic Regression classifier, but that is a pretty good score considering how little we had to do! One last classifier to try is the SGD classifier, which comes in handy when you need speed on a really large number of training examples/features. Which machine learning algorithm it ends up using depends on what you set for the loss function. If we chose loss = 'log', it would essentially be identical to our previous logistic regression model. We want to try something different, but we also want a loss option that includes probabilities. We need those probabilities if we are going to be able to calculate the area under a ROC curve. Looking at the documentation, it seems a 'modified_huber' loss would do the trick! This will be a Support Vector Machine that uses a linear kernel. End of explanation """ model_SGD.grid_scores_ """ Explanation: Again, similar to the Logistic Regression model, we can see which parameter did the best. End of explanation """ LR_result = model_LR.predict_proba(X_test)[:,1] # We only need the probabilities that the movie review was a 7 or greater. LR_output = pd.DataFrame(data={"id":test["id"], "sentiment":LR_result}) # Create our dataframe that will be written. LR_output.to_csv('Logistic_Reg_Proj2.csv', index=False, quoting=3) # Get the .csv file we will submit to Kaggle. """ Explanation: Looks like this beat our previous Logistic Regression model by a very small amount. Now that we have our three models, we can work on submitting our final scores in the proper format. It was found that submitting predicted probabilities of each score instead of the final predicted score worked better for evaluation from the contest participants, so we want to output this instead. First, do our Logistic Regression submission. End of explanation """ # Repeat this for Multinomial Naive Bayes MNB_result = model_NB.predict_proba(X_test)[:,1] MNB_output = pd.DataFrame(data={"id":test["id"], "sentiment":MNB_result}) MNB_output.to_csv('MNB_Proj2.csv', index = False, quoting = 3) # Last, do the Stochastic Gradient Descent model with modified Huber loss. SGD_result = model_SGD.predict_proba(X_test)[:,1] SGD_output = pd.DataFrame(data={"id":test["id"], "sentiment":SGD_result}) SGD_output.to_csv('SGD_Proj2.csv', index = False, quoting = 3) """ Explanation: Repeat this with the other two. End of explanation """
sassoftware/sas-viya-programming
communities/Your First CAS Connection from Python.ipynb
apache-2.0
# Import the SWAT package which contains the CAS interface import swat # Create a CAS session on mycas1 port 12345 conn = swat.CAS('mycas1', 12345, 'username', 'password') """ Explanation: Your First CAS Connection from Python Let's start with a gentle introduction to the Python CAS client by doing some basic operations like creating a CAS connection and running a simple action. You'll need to have Python installed as well as the SWAT Python package from SAS, and you'll need a running CAS server. We will be using Python 3 for our example. Specifically, we will be using the IPython interactive prompt (type 'ipython' rather than 'python' at your command prompt). The first thing we need to do is import SWAT and create a CAS session. We will use the name 'mycas1' for our CAS hostname and 12345 as our CAS port name. In this case, we will use username/password authentication, but other authentication mechanisms are also possible depending on your configuration. End of explanation """ # Run the builtins.listnodes action nodes = conn.listnodes() nodes """ Explanation: As you can see above, we have a session on the server. It has been assigned a unique session ID and more user-friendly name. In this case, we are using the binary CAS protocol as opposed to the REST interface. We can now run CAS actions in the session. Let's begin with a simple one: listnodes. End of explanation """ # Grab the nodelist DataFrame df = nodes['nodelist'] df """ Explanation: The listnodes action returns a CASResults object (which is just a subclass of Python's ordered dictionary). It contains one key ('nodelist') which holds a Pandas DataFrame. We can now grab that DataFrame to do further operations on it. End of explanation """ roles = df[['name', 'role']] roles # Extract the worker nodes using a DataFrame mask roles[roles.role == 'worker'] # Extract the controllers using a DataFrame mask roles[roles.role == 'controller'] """ Explanation: Use DataFrame selection to subset the columns. End of explanation """ conn.close() """ Explanation: In the code above, we are doing some standard DataFrame operations using expressions to filter the DataFrame to include only worker nodes or controller nodes. Pandas DataFrames support lots of ways of slicing and dicing your data. If you aren't familiar with them, you'll want to get acquainted on the Pandas web site. When you are finished with a CAS session, it's always a good idea to clean up. End of explanation """
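# Added sketch (not in the original notebook): contextlib.closing calls conn.close() on exit,
# even if an action raises an error. The host, port, and credentials are placeholders, as above.
from contextlib import closing
import swat

with closing(swat.CAS('mycas1', 12345, 'username', 'password')) as conn:
    nodes = conn.listnodes()
    print(nodes['nodelist'][['name', 'role']])
"""
Explanation: If you prefer the clean-up to happen automatically, Python's contextlib.closing can wrap the CAS session so that close() runs even when an action fails inside the block. This is a hedged sketch that reuses the same placeholder connection details as the rest of the notebook.
End of explanation
"""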
mgalardini/2017_python_course
notebooks/[3]-Exercises.ipynb
gpl-2.0
# Import the packages that will be usefull for this part of the lesson from collections import OrderedDict, Counter import pandas as pd from pprint import pprint # Small trick to get a larger display from IPython.core.display import display, HTML display(HTML("<style>.container { width:90% !important; }</style>")) """ Explanation: Note: solutions of the exercices of this lesson are at the end of this notebook Imports Importing the package you will need on the top of your notebook is a good programming practice End of explanation """ from random import choice disaster_list = ["Global_pandemic", "Brexit", "US_presidential_elections", "Nuclear_war", "Asteroide_storm", "End of_antibiotics_era"] choice(disaster_list) """ Explanation: Reminder on file parsing strategy Read the first line of the file and try to understand the structure and determine the length of the file If the file is a standard genomic/proteomic format, read the documentation Think about the most efficient way to parse the file to get the information you want How are you going to access the field(s) of interest ? (you can test that with 1 line before starting with the whole file) A real life file will contain millions of lines and file reading is usually slow. Try to read the file only 1 time, even if you need to parse multiple element per line. How are you going to collect the information (dictionary, list, dataframe...) ? ... Now you can parse the file Counter Exercise 1: Randomly selected global event with catastrophic consequences End of explanation """ file = "../data/US_election_vote_sample.txt" """ Explanation: The file indicated bellow contain a representative sample of popular votes for the last US presidential elections. Parse the file and return a count of the number of occurrence of each name End of explanation """ file = "../data/gencode_sample.gff3" ! head -2 ../data/gencode_sample.gff3 # check format of GFF file # rearrange the lines below to create a working solution type_field = line.split('\t')[2] type_counter = Counter() print('%s:\t%d' % count_info) with open(file, 'r') as fh: for line in fh.readlines(): for count_info in type_counter.most_common(): from collections import Counter type_counter[type_field] += 1 """ Explanation: Exercise 2: Parsing a gene annotation file (gff3) Parse The following gff3 gene annotation file. Extract the type of each line and print the count ordered by descending occurrence gff3 is a standard genomic format. Read the documentation here. End of explanation """ file = "../data/gencode_sample.gff3" ! head -2 ../data/gencode_sample.gff3 # fill in the blanks (---) in the code below to create a working solution from collections import --- sequences = OrderedDict() with open(file, ---) as fh: --- line in fh.readlines(): fields = line.split() seqid = fields[---] attributes = --- ID = attributes.split(';')[0] if --- in sequences: sequences[seqid].append(ID) else: sequences[seqid] = [ID] for seq, ids in sequences.items(): print('%s:\t%s' % (seq, ', '.join(ids))) """ Explanation: OrderedDict Exercise 3: Parse The following gff3 gene annotation file. For each seqid (chromosome) list the sequences ID associated. Use an ordered dict to preserve the original chromosome ordering from the file Example: d={"chr1": ["stop_codon:ENST00000370551.8", "UTR5:ENST00000235933.10", ...], "chr2": ["ID=CDS:ENST00000504764.5", "ID=UTR5:ENST00000480736.1", ...], ... 
} End of explanation """ # Import the uniform method (which is also a generator by the way) to generate random floats from random import uniform def DNA_generator (): # Eventually you can have options for relative nucleotide frequencies # Calculate cummulative frequencies to avoid having to do it each time in the loop --- --- --- --- # Iterate indefinitly... until a nuclear apocalypse at least. while True: # Generate a random float frequency between 0 and max freq freq = uniform (0, ---) # Depending on the random frequency return the approriate base if --- : yield "A" elif ---: yield "T" elif ---: yield "C" else: yield "G" """ Explanation: Generator Exercise 4: 1) Write a generator that can ouput DNA bases with the following frequencies A/T = 0.19 and C/G = 0.31 End of explanation """ d = DNA_generator() --- """ Explanation: 2) Using this generator to generate a 100nt sequence (as a string) End of explanation """ file = "../data/sample_alignment.sam" import pandas as pd df = pd.read_table(file, comment='@', header=None) """ Explanation: Statistics and viewing Exercise 5: Parse The following sam file. sam is a standard genomic format. Read the documentation here. It can be read as a table with pandas. End of explanation """ file = "../data/abundance.tsv" """ Explanation: Which of the following code blocks will: 1) Print the 10 last rows? ```Python a) df.head(10) b) df.tail(10) c) df.last10() d) df[6, 7, 8, 9, 10, 11, 12, 13, 14, 15] ``` 2) Sample randomly 10 rows and compute the mean and median fragment length (TLEN)? ```Python a) sample = df.sample(10) meanTLEN = sample[1].mean() medianTLEN = sample[1].median() b) sample = df.sample(10) meanTLEN = sample[8].mean medianTLEN = sample[8].median c) sample = df.sample(10) meanTLEN = sample[8].mean() medianTLEN = sample[8].median() ``` 3) Generate a summary for all the columns? ```Python a) pd.describe(df) b) df.describe(include='all') c) summarise(pd.df) d) df.summarise('columns') ``` Selection and indexing Exercise 6: Parse the following count file obtained by Kallisto from an RNAseq Dataset. The file is not a standard genomics format, but it is a tabulated file and some information can be found in Kallisto documentation. Extract the following target_id: 'ENST00000487368.4', 'ENST00000623229.1', 'ENST00000444276.1', 'ENST00000612487.4', 'ENST00000556673.2', 'ENST00000623191.1' Select only the est_counts and tpm columns and print the first 10 lines of the table Extract of the rows with a tpm* value higher that 10000 * Extract of the rows with a tpm and an est_counts higher than 1000, order by descending eff_len and print the 10 first lines. End of explanation """ abundance_file = "../data/abundance.tsv") gsym_to_tid_file = "../data/gene_symbol_to_transcript_id.tsv" df1 = df1.head() df2 = df2.head() df3 = """ Explanation: Merging dataframes Exercise 7: 1) Create a dataframe associating the abundance data from the abundance_file and the gene symbols from gene_to_transcript_file. Do not include transcripts without gene symbols in the list Sort the transcript_id Reset the index Discard the target_id column End of explanation """ file = "../data/codon_usage_bias_human.tsv" """ Explanation: Mixing Pandas and generators Exercise 7: Create a random codon generator based on the known codon usage bias in the human genome. 
The following file contains a list of all the human codon codons, their AA traduction, their count in the human genome and their frequency/1000: "../data/codon_usage_bias_human.tsv" To do so, you can parse the file in a pandas DataFrame. Pandas has a very useful method to sample a line from the Dataframe. Use the weights option to sample according to a probability weighting. End of explanation """ c = Counter() # Open the file with open ("../data/US_election_vote_sample.txt", "r") as fp: for candidate in fp: # Increment the counter for the current element c[candidate]+=1 # Order by most frequent element c.most_common() """ Explanation: . . . . . . . . . . . . . . . . . . . . . . . . . . . . POSSIBLE ANSWERS Exercise 1 End of explanation """ file = "../data/gencode_sample.gff3" c = Counter() # Open the file with open (file, "r") as fp: # Iterate over lines for line in fp: # Split the line and get the element 3 feature_type = line.split("\t")[2] # Increment the counter c[feature_type]+=1 # Order by most frequent element c.most_common() """ Explanation: Exercise 2 End of explanation """ !head -n 1 "../data/gencode_sample.gff3" file = "../data/gencode_sample.gff3" d = OrderedDict() # Open the file with open (file, "r") as fp: # Iterate over lines for line in fp: # Split the line and get the element 3 seqid = line.split("\t")[0] # Parse the line to get the ID ID = line.split("\t")[8].split(";")[0][3:] # if not seqid in d: d[seqid] = [] d[seqid].append(ID) d """ Explanation: Exercise 3 End of explanation """ from random import uniform def DNA_generator (A_freq, T_freq, C_freq, G_freq): # Customizable frequency argument # Calculate cummulative frequencies to avoid having to do it each time in the loop cum_A_freq = A_freq cum_T_freq = A_freq+T_freq cum_C_freq = A_freq+T_freq+C_freq cum_G_freq = A_freq+T_freq+C_freq+G_freq # Iterate indefinitly while True: # Generate a random float frequency between 0 and max freq freq = uniform (0, cum_G_freq) # Depending on the random frequency return the approriate base if freq <= cum_A_freq: yield "A" elif freq <= cum_T_freq: yield "T" elif freq <= cum_C_freq: yield "C" else: yield "G" # achieve the same using random.choices # if using python v<3.6, import choices from numpy.random from random import choices def DNA_generator_choices(weights=[0.19, 0.31, 0.31, 0.19], letters=['A', 'C', 'G', 'T']): while True: yield choices(population=letters, weights=weights, k=1)[0] # Create the generator with the required frequencies d = DNA_generator(A_freq=0.19, T_freq=0.19, C_freq=0.31, G_freq=0.31) # Test the generator print(next(d), next(d), next(d), next(d), next(d), next(d), next(d), next(d)) # Create the generator with the required frequencies d = DNA_generator_choices() # Test the generator print(next(d), next(d), next(d), next(d), next(d), next(d), next(d), next(d)) """ Explanation: Exercise 4 )1 End of explanation """ # iterative str contruction with a loop seq="" for _ in range (100): seq += next(d) seq # Same with a one liner list comprehension seq = "".join([next(d) for _ in range (100)]) seq """ Explanation: )2 End of explanation """ file = "../data/sample_alignment.sam" columns_names = ['QNAME', 'FLAG', 'RNAME', 'POS', 'MAPQ', 'CIGAR', 'RNEXT', 'PNEXT', 'TLEN', 'SEQ', 'QUAL'] df = pd.read_table(file, sep="\t", names = columns_names, skiprows=[0,1], index_col=0) df.tail(10) tlen_sample = df.sample(10).TLEN print (tlen_sample) print ("\nMean:", tlen_sample.mean()) print ("\nMedian:", tlen_sample.median()) df.describe(include="all") """ Explanation: Exercise 5 
End of explanation """ file = "../data/abundance.tsv" df = pd.read_table(file, index_col=0) df.loc[['ENST00000487368.4', 'ENST00000623229.1', 'ENST00000444276.1', 'ENST00000612487.4', 'ENST00000556673.2', 'ENST00000623191.1']] df[["est_counts", "tpm"]].head(10) df[(df.tpm > 10000)] df = df[(df.est_counts > 1000) & (df.tpm > 1000)] df = df.sort_values("eff_length") df.head(10) """ Explanation: Exercise 6 End of explanation """ df1 = pd.read_table("../data/abundance.tsv") df2 = pd.read_table("../data/gene_symbol_to_transcript_id.tsv", names=["transcript_id", "gene_symbol"]) df3 = pd.merge(left=df1, right=df2, left_on="target_id", right_on="transcript_id", how="inner") df3 = df3.sort_values("transcript_id") df3 = df3.reset_index(drop=True) df3.drop(["target_id"], axis=1) df3.head() """ Explanation: Exercise 7 End of explanation """ print ("\x47\x6f\x6f\x64 \x4c\x75\x63\x6b") """ Explanation: Exercise 8 End of explanation """
Olsthoorn/TransientGroundwaterFlow
exercises_notebooks/FirstExercise.ipynb
gpl-3.0
import numpy as np # import numerical package and locally call it np import matplotlib.pyplot as plt from scipy.special import k0 # import the K0-bessel function import scipy # import scientific package K0 = scipy.special.k0 # assign variable name K0 to function k0 # this makes K0 and k0 point at the same memory location where the # code for the besselfunction is located. The function can be # evoqued by putting parenthesis behing the function name k0() # Functions that need arguments have to be given them like k0(1.2) a = 1.23 # crate a name a and assign a value 1.23 to it # that is: store the value 1.23 somewhere in the memory of the computer and # generate a name a, and let the name a be a pointer that poitns at the store # value in memory (this is what happens when a value is assigned to a new # valriable) # now we can use both k0 and K0 for the same thing print( k0(a)) print( K0(a)) print( scipy.special.k0(a)) """ Explanation: Transient course, first notebook exercise This notebook shows the notebook, python and a little bit of groundwater It's meant to be a getting accustomed with notebooks in Python for groundwater and anything else. Delft, December 22, 2016 There are many tutorials on any aspect of Python and notebooks, see the Internet and also YouTube. Ok, let's go: This cell is a markdown cell, that is a text cell in which formatting can be done using mardown instructions. There is a tutorial under the Help menu at the top of this file. Markdown is like this and like this and like the headers at the top of this file. The amazing thing is that you can also write mathematical formulas that will be typeset professionally once the cell is executed by pressing SHIFT-ENTER together. Some examples are given below. To see how it works op a cell by clicking on it. Executing an open cell by pressing SHIFT ENTER. For help see under Help in the menu above. Als look at the keyboard shortcuts under help. They are useful to getting fast with notebook. Formulas are TeX (LaTex) Mathematical formulas can be entered as TeX (LaTex) the language use worldwide for entering math in documents. It's easy as you will see. There is codes for greek letters ($\alpha, \beta, \dots$), special things like the square root $\sqrt{\dots}$, fractions $\frac Q {2 \pi kD}$, the integral $\intop_a^\infty\dots dx$, the sum $\sum_{i=0}^\infty(\dots)$, superscipt $a^b$, subscript $a_b$, derivatives ${\frac {\partial^2 \phi} {\partial x^2}}$ and so on. It's extremely effective. If you look for a workprocessor that naturally incorporate math formulas and delivers beautifully formatted documents without you doing effort for it, go for LyX. It's free. It never chrashes, and you can de math by hand in it, by directly manipulating your formulas without ever making errors. It en angineer and scientist's favorate, because it reduces the need to learn LaTex to an absolute minimum. And what's more, you can directly copy text including the formulas into these notebooks. Greek letters Lower case $$ \alpha \beta \gamma \delta \epsilon \eta \sigma \tau \zeta \xi \theta \pi \lambda \phi \psi \omega $$ Upper case $$ \Gamma \Delta \Sigma \Xi \Theta \Pi \Lambda \Phi \Psi \Omega $$ Not all upercase work, because some of the uppercase letters such as Alpha an Beta are just A and B as in Latin. Some formulas Math formulas are either $in-line$ or $$outstanding$$ As can be seen, the in-line formulas are placed between single dollars and the outstanding ones between double dollars. 
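For example, the drawdown formula used further down in this notebook can be written in-line as $s = \frac Q {2 \pi kD} \ln \frac R r$ (single dollars), or as an outstanding equation between double dollars: $$ s = \frac Q {2 \pi kD} \ln \frac R r $$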
Words with special meaning start with a "\" (back-slash) like $$ \sum, \intop, \sqrt{a \times b \times c} \,\,\,\, \frac {a} {b} $$ Underscore and upperscore go like this $$ s_x a^{2\sqrt{2 \pi r}} \intop_{-\infty}^\infty \sum_{n=1}^N $$ Functions like exp, ln, sin, cos, tan etc. will be put in the correct font if precended by a "\" $$ \exp(2 \pi i \theta) \ln \frac r \lambda \sin (2 \pi \alpha) \cos( \ln \frac {3 k} {5 \sqrt{\sigma}}) $$ $$ \ln \left( \frac {2 \pi \zeta} {\exp(2 i)} \right) $$ You get nice big parentheses, absolute and brackets like this $$ s = \left| \left[ \frac Q {2 \pi kD} \ln \left( \frac r \lambda \right) \right] \right|_{i=1}^{k=\infty} $$ Great, smaller, approximate etc like this $$ a \le b \gg c \ge d \approx \Gamma $$ Limits, and arrows, see also the hat on pi: $$ \lim_{x\rightarrow 0} K_0 \left( \frac r \lambda \right) = \hat {\pi} $$ Some other examples: $$ s(r) = \frac {Q} {4 \pi kD} K_0 \frac {r} {\lambda} $$ $$ y = \intop_{-\infty}^\infty \frac 1 y dy $$ In short,everything is possible and not that difficult. This way of generating fomumlas is called TeX or LateX. Our first code Below is our first cell with code. First things first: We start with importing the python modules that we will always use when doing engineering work with Python. End of explanation """ # use comments (i.e. text after "#" to tell the dimension and what it is) Q = 1200. # m3/d """ a dot behing the number like 2. or 2.0 tells that the number is a float and not an integer. Sometimes this is important mostly not, because python will convert the integer to a floating point if the context demands is """ kD = 900. # m2/d r = 150. # m R = 400. # m c = 350. # d lam = np.sqrt(kD * c) # m. spreading lenght of semi-confined aquifer r = np.logspace(-2, 4, 41) # generate a series of distance values # To show r print(r) print('\nHere are the values of r:\n',r,'\n...Done.\n') print('"\\n" means a new line, therfore:\n\n\n','were tree new lines') s1 = Q / (2 * np.pi * kD) * np.log(R /r) # drawdownin confined aquifer s2 = Q / (2 * np.pi * kD) * k0(r / lam) # drawdown in semi-confine aquifer # approximation of head in semiconfined aquifer s3 = Q / (2 * np.pi * kD) * np.log(1.123 * lam / r) fig = plt.figure() ax = fig.add_subplot(111) # one plot on the current axes ax.set(xlabel='r [m]', ylabel='head [m]', xscale='log', title='Three steady state drawdown lines') ax.plot(r, s1, 'r', linewidth=3, label='confined') ax.plot(r, s2, 'g-', label='semi-confined') ax.plot(r, s3, 'x', label='semi-confined, approximation') ax.grid(True) # also plot grid lines ax.invert_yaxis() # turn y-axis upside down ax.legend(loc='best', fontsize='small') plt.show() # don't forget plt.show() """ Explanation: Let's compute steady state groundwater drawdown due to pumping a well in 1. a confined aquifer with fixed circular bounary at R=400 m 1. a semi-confined aquifer where the covering toplayer has a resistance c d End of explanation """ print(range(2, 5, 2)) for i in range(2, 11, 2): print(i) # The numpy way: print('\nfrom array :\n',np.arange(2, 5, 0.3)) print('\nfrom linspace:\n', np.linspace(2, 5, 11)) print('\nfrom logspace:\n', np.logspace(-1, 1, 11)) print('\nfrom zeros :\n', np.zeros((3,4))) print('\nfrom ones : \n', np.ones((4,3))) """ Explanation: It can be seen that the approximation is good over a large range of r (the approximation (crosses) overlap the exact values (green line)). 
The red line is also a straight line on semi-log graph, but its absolute level depends on the distance R of the circular bounary where the head is fixed at 0. It was taken at R=400 m. If you make R=1.123 * lam, then you'll see that the red line matches the other two. To generate series of numbers we can use range([start,], end [,interval]) # gives a list not an array np.arange([start,] end, interval) # gives an array np.linspace(start, end, number_of_points) # gives an array np.logspace(log10(start), log10(end), number__of_points) # array np.ones((Ny[,Nx[,Nz] ...]) gives an array of 1's np.zeros(Ny[,Nx[,Nz] ...]) gives an array of 0's Examples End of explanation """
with-git/tensorflow
tensorflow/tools/docker/notebooks/3_mnist_from_scratch.ipynb
apache-2.0
from __future__ import print_function from IPython.display import Image import base64 Image(data=base64.decodestring("iVBORw0KGgoAAAANSUhEUgAAAMYAAABFCAYAAAARv5krAAAYl0lEQVR4Ae3dV4wc1bYG4D3YYJucc8455yCSSIYrBAi4EjriAZHECyAk3rAID1gCIXGRgIvASIQr8UTmgDA5imByPpicTcYGY+yrbx+tOUWpu2e6u7qnZ7qXVFPVVbv2Xutfce+q7hlasmTJktSAXrnn8vR/3/xXmnnadg1aTfxL3/7rwfSPmT+kf/7vf098YRtK+FnaZaf/SS++OjNNathufF9caiT2v/xxqbTGki/SXyM1nODXv/r8+7Tb+r+lnxZNcEFHEG/e3LnpoINXSh/PWzxCy/F9eWjOnDlLrr/++jR16tQakgylqdOWTZOGFqX5C/5IjXNLjdt7/NTvv/+eTjnllLT//vunr776Kl100UVpueWWq8n10lOmpSmTU5o/f0Fa3DDH1ry9p0/++eefaZ999slYYPS0005LK664Yk2eJ02ekqZNnZx+XzA/LfprYgGxePHitOqqq6YZM2akyfPmzUvXXXddHceoic2EOckxDj300CzPggUL0g033NC3OKy00krDer3pppv6FgcBIjvGUkv9u5paZZVVhoHpl4Mvv/wyhfxDQ0NZ7H7EQbacPHny39Tejzj88ccfacqUKRmHEecYf0Nr8GGAQJ8gMHCMPlH0QMzmEBg4RnN4DVr3CQIDx+gTRQ/EbA6BgWM0h9egdZ8g8PeliD4RutfF/Ouvfz9OtZy8aNGiNH/+/GGWl1122XzseYuVNKtqsaI23Ghw0DYCA8doG8JqO+AUG2+8cVq4cGHaY4890vLLL5/WXXfdfI6jvPDCC3lJ8amnnkoezP3000/pl19+GThHtWpIPekYomTxFS7HnkqKjMsss0yGgFE4r62tSBFVJ02aNPyconi9V4/JwzHwT9ZNNtkkeZ6w5ZZbph133DH99ttv6ccff8zXX3nllcRRnHNfv2cNGMQWGRaOrWbUrjsGBRLAA6U4Lhoqw9h2223ztRBq6aWXzsbgvueffz4Lu9NOO2UnYTgrr7xy7tO9nOH111/Pbb744ov0ww8/jAvngAdFMvQDDjggG/0GG2yQX1GZNm1aziCCwzrrrJPl3muvvXKwePnll9M333wzHDCKWPbLMbuAkfISjnvvvXcW/emnn85lqCBqa4a65hiYR/Gk2RNGRlwm3n7ggQfmdrKD9sqJtdZaKxvCnDlz8n3Tp09PXmPYeuutc0SVNQjvnmuvvTa3efzxx9N33303PGZ5rF75DBvvqq233nrp22+/TWeddVbyikpgxCE4vQDhlQUBRfDw2esbs2fPTquvvnqviNN1PuIdJ4GErVx44YUZowsuuCB9+umn6eeff84BspmsWqljhPFDxjGGYx/lDkN33udajCoVlAjRzl4U8LjefRwnPjsXG8OJqKBd8NB1LTU5IHyCd7LJGOYXNoGjFqaGIKtrERDIDKtukfGMH/zRZa1A101+YBF44KfMYzO8VOYYjDWiukiGqc022yyXOUqdzTffPJ/z1ialeqNVxA9gi0wzlOJ5juJlR8JeddVV+ZrIKTq4ZvJp/8EHH+SU+txzz+W2SqmxVFZRplrH5DTRXmGFFdKuu+6azjjjjOzosl5g6D54CQCI4mGjhNQO5occckh2LvLTA6fqJOEnyhU6kNlkZmUuvrtNcFx77bUzhsZWXgoSsm6t4Dsa/tp2DErCmA04HAI4FLjaaqtlBhmnSKiNY4rDtHZFB6jFMMH0RVDH+nCPYxtDCFJnKkniRbDitWjTK3sykQUuMLPn3DZGX8SFnCG/fVyz5zCCBtIHTLshdzif8fERn8cKXxjCNOwCTu3Qf6yqhV4AQokiP489//zzM0DxnQYKwqAtIkko1kQzFFxvaNcJ6u3Pe+65J/cRRvDee+9lA2BInIyRff/997nNO++8k7t0vl2A6vHWynmyiPJ43WKLLbIijz/++LTddtvlTCdzwIWSg9yjxBJ0GN/DDz+c7zv77LOzbEceeWSekwVGgsOsWbNyNo0+qt7DfPvtt8/dmtvIGnPnzk3PPPPMsJ6rHrNef/BBeJA90RprrJEDcNhctMkXR/mnbccwuCjNGTbaaKMc8TBZprITxOdgOvbuKxqGz6LSJ598kseJ9Gi1CYmSv/76a3YyJZWMZJ6Ceskp8EMusihFEAyUmVaa8G2rxTNHIrd733///eH7YeaLNe5xrEzlWNF/HqQDf0Tm+GIbvYdD43MsKAIo/JDgE0G5aFfN8NaWYxiUshikqGYTTUSt0TCkjXsYNqJQQso+rgGa0vX58ccf56hQTtk+48F92rmvlnE1A0on2uKP0Yrw+Nxzzz0zn+ZhjKwRXq6vueaa2TmUiRQfS7SyNeMks9IV9vrvJOl/q622yo4Mfw5Pvm6TMclLdit6shh+YAMnq1E29tEsteUYBgMSgxa5MOAzJZcVXQs4bUR8XxhCHIwzMALCBuCcx5q0tF3u133l8XrRMchFiRYNyMxBKM/5IjZlWVzjULKwACISytIWFsi56aab5mvOKyEikmdAO/iHY+BDCRUZuoPD1e1akECyLseA7d13352DhdKak8Cmlt3U7TSl9p58FwejYK8ncAwKpDTnGDcARbWiAUjHiNEHsITSPlagpEZChcfrZzwSOfBOiQwXLuR3PjAhtwAD08iAMCO/a+5xPTIm3ALjwERf0V+c69QeT7ZujVdLDhgKBrANXAMreMESRkU7rdVPrXNtZ4xIpSLH1VdfnR3j4IMPzkbw2Wefpa+//jovo5188slZsZjArAcvFP3YY4+lSy+9NEdTdTTy0I5xHHfccfm1CH2LtuORKEqmkwVlVU+sBY+IdJRmE0zeeOONnEXuu+++7AhnnnlmWn/99XMJ5brtzTffzHMJx/o555xzkgdb0U8rRtAKrnTYqtG1Ml6teyxInHDCCdlGYByBmG2Z97ChVvFo2zEwbHCRTbqP7EDxPjN2pUBEe86AXAcsg+f10TYMSTvnRM1ulQe1wG/nHEXZZEJZUIYQ5cgWMsEgMgqclFdkdh+MbFFyuddnWMLNfTYkcuuXHlBkpFYNI3dS+mMMfCHHsZWadfUjmQVn8iLywscG21apMscQwR555JEM3KuvvpoZ5LHOmzgjAvBwzFt2/Oijj3Lm4Ayin/MU/eGHH+b2N998c/5MGSaZ44nw7OEd5Rx77LE5+1EehYXxkpes5li2K6+8Mhv8Lrvsko381ltvzcEBfvHQKh5auk9GPvHEE3NJAx+/eKL/HXbYIQcbK3nwN067xAk4s5VHdbvsx0nxrYQeKxJMZAfBA7GlRx99NC9EtCN7JY4RoPBeAHIAyrB3jpHYwqu1d02d7HpZcfqINo5dL7eJMXtxTzk2sgWFM/gcsnCakI2cFOk+523O+Qw7WaeYHYpYRp9xn4BkbPdWSfgJXYYM+ne+2
xRj2sdx8EDu8rm4Ntp9pY4RSmb0CIPOAVNGoLA47yU4S2xen37ppZdy9CkLE/3lm8bJHzJbbiavt2Q9p7AkK7oyXAZOLk7gs9c4PJC0AOE8DDyrgJkaWgYQkSPYuAdpWySfteU8HhqKouYq+io6ZfGeZo7xpbT1+jt+jGULfprpq922ePHMBibwjWVq523KVrzBsIzTaMeu1DFi0HI0YyyYtAekY5MltbRyihFJiROBKIYTwMCTWJNubwdQFCXFapK9z96mtbjgs3thFKWnUgjBzNZIya5FOyUcPG36q4LwRgZ6Ix8HtBk3tirGGU0feAkslHfk5PzBh2cXSkvtWqWOOEaRGcoSHdXDMoYn1tK8yaON0ahbCWgFS/vxSnjn5F4ItLeiFAGAzCKc7MDA1OlIjc4pLFKE7FEyxb5ZPNTbtuiv2fvrtddfOFsYXcwj8d8qv/XGq3femLvvvnvOvrIYPPEjG+PDseDbDnXcMXiyiGiyyACOPvrovN95552zV3/++ef5zVveznlEo6CICvG5l/d4JSvHP+qoo7JjKDs4PkVSGPm9HSz9W5rlPEoCQYHjVFXyRGnBOcKA28VOP/qTBWX6YnS2IKB8qYL/enyGHPbKziOOOCLj6sGeslGW8L6Y4ANr2MY99fpsdL7jjmFwkSTSr6gDVCk+tmDQedcJ5LgdwaLPbu7xjJRRNlErSsiQhVHJlOEQoh182o1wRTnharwYs3itnWP9Rd/RD5mLW5yveh/YRhYMjItyBh/wjPat8tEVx6B00RKo5513XpIl7rzzzuwEourMmTOz95uIcyBfTSXYiy++mCOrSFS1klsFrNZ9eGPoJtmeyRx00EE5cpGbIi21XnbZZbkMee2117KMHIKMIVcotVb/vXoOz6I0+URoMlVFcBFE7L1+IjNYIo6v/fo+D3tC+FCR+FHuwNUCgfOtUlccI5hnJMoIBhN1sBICqMoNNaLP3pkiFGciIIBC4HaEbRWk0dyHb3Mp/EY0I6+NsytvyKxsKhpQr8ozGpm1IZ8IbV+PyllGuyh1YBXXOQEcy6R8M5eAHzuxxX3GRvbaCKJ4aRfXrjkG5jEbk00Prxi8SZTJKmc5/PDDc5v99tsvC+hBjWtqStmD0F4Ma1foMvDtfqZMUc3/lYjMSFFW3NS7JtyyoKzSiTocHoFJHMc+MlK7Mta7n9NbATJerbEYvQWIWCVitIyaXrV3nsG7H2Y2GVcbxyj6NX+waKEPmOvbfShwtjhQDDz5Ygt/uuoY+OPtnICDEMBTWsAQUu0NBBsDEgFEWOADAiDaVRERWsCq5i34IRN+TbTJgn8KwzOFuR4KDUXW7Kyik53Ep8w/+RkxWeO5S1EM5wVABguXMGp69dk1x87D0ObdL32GHI5tsDQGHtwbm/Hw4TpnKvNY5Ge0x113DEwT3tIsIdSnDIfxcxJAevCHfE9cXcmotHXfAw88kIFUdgFjLMn4HuZRuh9FExmjRCCnZxRqcPxz8ioUVk9eRhJkPAYHV8ZVFRkjjFSfAtw222yTy2OZ0iv15fHcQ4dKaMcwsBdEEL26RzaIh5+yK7LSBGPno8yOZX+vzRhfXzZ8cRrtyzzkzpr803XHwB8wTJYIRol+VY8zqMMBbP0f+cExE1qTdbU7x3jwwQdzVBYdesExKNiEWx2MfwoOAyCbJ9uRHZvUTcPmsENhGNE4HBKOHKNqZzQu3KNfX9H1nRABQZlbNkpt4SNo4DWIIesDj9qYnwki2giWqol3330348kZLPm7xvi1Pffcc7MzhA3gy/0oeIuxWtmPiWNgNCIFYwcCAa2FA1ikJZz1aeUVsBmge9TyoqGoIqKUFdEKCFXcU0/pHJizVMUnXBiBh6IicdTTzsEOnuZkDE/2rcJI4KMf/TF+0TucwDhkZ+DGL4/nGkPGV/AIC+2RvfP6ZPTI4gu5XNM/Um7RPzuIFyn1zW7wpQ9UHj+fbOHPmDlGCOGBGIeQQfwuq0jnISBQfOHft7JEHN94Q5xF6XLFFVfkyKIEGyuiGAo3r6BIx0imcM6k+6GHHspOEQbcDq+UTl4BwRu7PstUiPEJFsa9/PLL83nXg6d2xnUvoxS5L7744uGyh/wyRpRF9YwSHsHjE088kWWADQeRFThZkTgBstensZG5h4m56oEdcAp9CwTOVUlj6hgECcGBpA6XDazeiLKhVABQAhKB3cNxbEAL4KoEppm+gjf3OMafDf+UW7zeTL/ltqIiAxBMOIIxnLOHgbFsMGQ4InhE0nJfrXw2hnIRD3SFBKmYWDfqE49woFvOzZno3NxM0HDciMjBDsjEBgLTsJHYN+qjmWtj7hjBLKFFQgL7qRz14jHHHJPBcC2M3wRPVDT5ohzZRv0Z16O/sdozAKmdopUH5kftTrzJpl+lk29CcgpLw3BgpMbwwqF/S80pGJ6xO0WM+8Ybbxw2TuOEoTYakwyovB/JKdzDMVQOHvCRzXju890fL11aGhcMqqIxdwwCRkYQDZAaE7lWBhyosQEmQM439MgffDHm0Si8EcuBC0ezcQSZVKYktzFEW+3sfQ4natRvu9eMTS9F7IvHo+m/2fb6LNuCc0WsW+mzHq9j6hgE9YCHp5tkez2EAVjlMOmyUlU2Lis8ygVR0rykyoltPZCaOY9fr32Qp50X6xi7pWCGbsHBvwLgGIcddljGxvcsjOU1GseyiKjJQWydpiqNsBlei85BfhNxeJunVCl31x0jBOMAjJ9jRC3OEERDS7QMI0qQohIYgLSq7FJuMZbi9WZA7kRbvFAWx5Dyy449mjEDG/dyDPW4VSiy2iNvBcCSUdxyyy35OYHrqJUx843j8I/qQpA074BVVdR1x+AIHCIiIGewsqIuds41tSSlOxeOFHuOQ/E+2zPEuFYVKM32U3RMvGy44YbZMTg2B2+GOIXXJcjpR9lkUy/QyZ7GUU8zAD9RCiuR0oQYVv1IMAk7qFL+rjkGg7GZQPLufffdN69QKJtkCAKKjNGu1p7gMgWDYEDRpkpAmu0rnMLehie/RavcI49Sr1ZW0w6V91ac/IsxmdHPB0U5pQ+4+TExDudNUhPufnaKIn7N6m2k9h11jKLRqP+UQJb2eHh4uYjK0LW1D0MpCq0NR4g24RTR/0hCdvM6/m14FtljeTL4D/liedFeO7LYcyh7eMGDY8X16IM8Vp9kWjj2GwWG5IZb2FKVOHTMMTCvDKBgD2Z22223bNynnnpqVrZXBFxjQDZUFJiwIqKHN8qHO+64IxvN/fffn9vG/VWC0UpfeC5uZMEbg/ctM/8SzYOxZ599Nhs4ebSx0ECpcDFvMCdRggkesoQ+zaHU0N4EgAEnue2227JTON+LgaEVDFu5h+w2Wdl33GFkEUIQqYIqdYwwbJGO8q2xOydqUiTFWpJVPzsuUwhlzzFETxlGdFSCqaMB4XwvUzgKWU3AyW4uwFns4QMbilUyxbq8p/4cw3UEB8FDGQUDx/acqB8zRS2dw5qthe3VatPKucocg6JiYu3lP2nfawvekKVITzgJQLH24QTBtPZeE2D89957b27jwZ1I
wIm8R2OMWHmJ+3pxTzaK8l+HyMrgTzrppMxqOIEsGoZvz0nsyWiliRMUl2G9aOk6POyLZVUvYtBpniL4wA1m9lVSW46BOQqKpTLK9FnUsxftvW4swssa4dkhCGFCMNfcp08lhM9KKc4h0obgsa8ShHb6Cv5DJnu8IwHB9TB852DkOlzIRV6kXbSVMfQj48BWdhE0TLr1Fe3zQR/+gRMK5yjuq4KjZccQ2SlYjexHmCnSkiLjtsesmlnpQ5naFo1A5GMAHoJxBI709ttv54ygntZWmWEcQMS9VQleRT9kNmfAG0P3HRPGbHnVudg4gEyJOAYiE0wikHAAcxHyxndO4KI/WHEK/Qzo7wjAXfaFNdurikaNtIERRTqmYIYdE2tGEs8hfJ8iFB/3xV67MCjG8NZbb6Unn3wyC+XfDxfnDxFp496qhK6qn5CDA5twK/fIRH5Gb0MMOhxCFgkKjOBoHqKEkmWvueaanG04iTHcP3CKQO0/e3ZhgceP2smqcKyKRuUYlEKhPDL+d5z1c4qVFTDnmBIZMwZ9DiKAzTmvCetPNFR7W7fXXt/KLddqTcyjr17bRybkEF5XiQhPHnMuDlF07MCB3I49l4EDxTrnfsFBJBxQbQSKeGoROqjdurWzIzoGJqRxS2KUf/rpp2flcRDRjRKVCdpFhCwz7rOVKE5z++235/7uuuuuXDq5P5yKEY0np8B3TKb9K1/vLTF0/7MiJtyRPYrq4fx+7R2e7vFDDzDyfx1goPwcUGMEYG/rFI3oGAYW0UUyimQIcRwGzbgpVsZAUTYE065xCtc5GUeSHTyg4kzKs/FKoSBljyhvTz6y2gseZAwlwgI+cNBGtpV9ZRj4BobjFY9O8g0bQcXWaRpxBE5hHuFnJ0XB6dOn56ge2QGDlK2dFSSG4b8kxVzEdSWGVxgYQLzrxJkIGgbTaUE73b9MZ/KNfIMOJpdcckndYZWmFAwv+wgydW/o8wsCK3xnz56dFzx8oxPGtk7QiI5h0FBaeGzRKYIpjDN2ig6lB9OiprmI60qNieIMIXvsQy7yotjH9eI+2hbPDY4bI8D+2JdnWTYY+iwDs78qaUTHEM0sI1pClAVMnqX9ImGQszB6DHoNOLzZNZlGRlEq9JNB9JOsRXvoxDGnsDTudwFUHTNmzMjDqEaU9xYvGgWiZnka0TEo16CeNyCM1SLtwmt5cNEoCOUa5xjQAIFWEGBP5rbKdTRr1qwcfGUMthXVTCt917pnRMdwE6ZiQm0JckADBMYCgWLwtXjTSeq/d5Y7ieag7wmDwMAxJowqB4JUicDAMapEc9DXhEFgcjxcM7vvR4on7bHS1q84WNkpUr/iEL+aOLRw4cIlQCmuIhUBmsjHlpQ9c7EmzjEsN1vd6DeCg8UVT+qRd7b6EQey8wMT+6El8RSu36xhIO8AgQYI9F94bADG4NIAgUDg/wHX+3lgThDIegAAAABJRU5ErkJggg==".encode('utf-8')), embed=True) """ Explanation: MNIST from scratch This notebook walks through an example of training a TensorFlow model to do digit classification using the MNIST data set. MNIST is a labeled set of images of handwritten digits. An example follows. End of explanation """ import os from six.moves.urllib.request import urlretrieve SOURCE_URL = 'https://storage.googleapis.com/cvdf-datasets/mnist/' WORK_DIRECTORY = "/tmp/mnist-data" def maybe_download(filename): """A helper to download the data files if not present.""" if not os.path.exists(WORK_DIRECTORY): os.mkdir(WORK_DIRECTORY) filepath = os.path.join(WORK_DIRECTORY, filename) if not os.path.exists(filepath): filepath, _ = urlretrieve(SOURCE_URL + filename, filepath) statinfo = os.stat(filepath) print('Successfully downloaded', filename, statinfo.st_size, 'bytes.') else: print('Already downloaded', filename) return filepath train_data_filename = maybe_download('train-images-idx3-ubyte.gz') train_labels_filename = maybe_download('train-labels-idx1-ubyte.gz') test_data_filename = maybe_download('t10k-images-idx3-ubyte.gz') test_labels_filename = maybe_download('t10k-labels-idx1-ubyte.gz') """ Explanation: We're going to be building a model that recognizes these digits as 5, 0, and 4. Imports and input data We'll proceed in steps, beginning with importing and inspecting the MNIST data. This doesn't have anything to do with TensorFlow in particular -- we're just downloading the data archive. End of explanation """ import gzip, binascii, struct, numpy import matplotlib.pyplot as plt with gzip.open(test_data_filename) as f: # Print the header fields. for field in ['magic number', 'image count', 'rows', 'columns']: # struct.unpack reads the binary data provided by f.read. # The format string '>i' decodes a big-endian integer, which # is the encoding of the data. print(field, struct.unpack('>i', f.read(4))[0]) # Read the first 28x28 set of pixel values. # Each pixel is one byte, [0, 255], a uint8. 
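  # (f.read(28 * 28) returns the 784 raw bytes of one image; numpy.frombuffer reinterprets them as uint8 pixels below.)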
buf = f.read(28 * 28) image = numpy.frombuffer(buf, dtype=numpy.uint8) # Print the first few values of image. print('First 10 pixels:', image[:10]) """ Explanation: Working with the images Now we have the files, but the format requires a bit of pre-processing before we can work with it. The data is gzipped, requiring us to decompress it. And, each of the images are grayscale-encoded with values from [0, 255]; we'll normalize these to [-0.5, 0.5]. Let's try to unpack the data using the documented format: [offset] [type] [value] [description] 0000 32 bit integer 0x00000803(2051) magic number 0004 32 bit integer 60000 number of images 0008 32 bit integer 28 number of rows 0012 32 bit integer 28 number of columns 0016 unsigned byte ?? pixel 0017 unsigned byte ?? pixel ........ xxxx unsigned byte ?? pixel Pixels are organized row-wise. Pixel values are 0 to 255. 0 means background (white), 255 means foreground (black). We'll start by reading the first image from the test data as a sanity check. End of explanation """ %matplotlib inline # We'll show the image and its pixel value histogram side-by-side. _, (ax1, ax2) = plt.subplots(1, 2) # To interpret the values as a 28x28 image, we need to reshape # the numpy array, which is one dimensional. ax1.imshow(image.reshape(28, 28), cmap=plt.cm.Greys); ax2.hist(image, bins=20, range=[0,255]); """ Explanation: The first 10 pixels are all 0 values. Not very interesting, but also unsurprising. We'd expect most of the pixel values to be the background color, 0. We could print all 28 * 28 values, but what we really need to do to make sure we're reading our data properly is look at an image. End of explanation """ # Let's convert the uint8 image to 32 bit floats and rescale # the values to be centered around 0, between [-0.5, 0.5]. # # We again plot the image and histogram to check that we # haven't mangled the data. scaled = image.astype(numpy.float32) scaled = (scaled - (255 / 2.0)) / 255 _, (ax1, ax2) = plt.subplots(1, 2) ax1.imshow(scaled.reshape(28, 28), cmap=plt.cm.Greys); ax2.hist(scaled, bins=20, range=[-0.5, 0.5]); """ Explanation: The large number of 0 values correspond to the background of the image, another large mass of value 255 is black, and a mix of grayscale transition values in between. Both the image and histogram look sensible. But, it's good practice when training image models to normalize values to be centered around 0. We'll do that next. The normalization code is fairly short, and it may be tempting to assume we haven't made mistakes, but we'll double-check by looking at the rendered input and histogram again. Malformed inputs are a surprisingly common source of errors when developing new models. End of explanation """ with gzip.open(test_labels_filename) as f: # Print the header fields. for field in ['magic number', 'label count']: print(field, struct.unpack('>i', f.read(4))[0]) print('First label:', struct.unpack('B', f.read(1))[0]) """ Explanation: Great -- we've retained the correct image data while properly rescaling to the range [-0.5, 0.5]. Reading the labels Let's next unpack the test label data. The format here is similar: a magic number followed by a count followed by the labels as uint8 values. In more detail: [offset] [type] [value] [description] 0000 32 bit integer 0x00000801(2049) magic number (MSB first) 0004 32 bit integer 10000 number of items 0008 unsigned byte ?? label 0009 unsigned byte ?? label ........ xxxx unsigned byte ?? 
label As with the image data, let's read the first test set value to sanity check our input path. We'll expect a 7. End of explanation """ IMAGE_SIZE = 28 PIXEL_DEPTH = 255 def extract_data(filename, num_images): """Extract the images into a 4D tensor [image index, y, x, channels]. For MNIST data, the number of channels is always 1. Values are rescaled from [0, 255] down to [-0.5, 0.5]. """ print('Extracting', filename) with gzip.open(filename) as bytestream: # Skip the magic number and dimensions; we know these values. bytestream.read(16) buf = bytestream.read(IMAGE_SIZE * IMAGE_SIZE * num_images) data = numpy.frombuffer(buf, dtype=numpy.uint8).astype(numpy.float32) data = (data - (PIXEL_DEPTH / 2.0)) / PIXEL_DEPTH data = data.reshape(num_images, IMAGE_SIZE, IMAGE_SIZE, 1) return data train_data = extract_data(train_data_filename, 60000) test_data = extract_data(test_data_filename, 10000) """ Explanation: Indeed, the first label of the test set is 7. Forming the training, testing, and validation data sets Now that we understand how to read a single element, we can read a much larger set that we'll use for training, testing, and validation. Image data The code below is a generalization of our prototyping above that reads the entire test and training data set. End of explanation """ print('Training data shape', train_data.shape) _, (ax1, ax2) = plt.subplots(1, 2) ax1.imshow(train_data[0].reshape(28, 28), cmap=plt.cm.Greys); ax2.imshow(train_data[1].reshape(28, 28), cmap=plt.cm.Greys); """ Explanation: A crucial difference here is how we reshape the array of pixel values. Instead of one image that's 28x28, we now have a set of 60,000 images, each one being 28x28. We also include a number of channels, which for grayscale images as we have here is 1. Let's make sure we've got the reshaping parameters right by inspecting the dimensions and the first two images. (Again, mangled input is a very common source of errors.) End of explanation """ NUM_LABELS = 10 def extract_labels(filename, num_images): """Extract the labels into a 1-hot matrix [image index, label index].""" print('Extracting', filename) with gzip.open(filename) as bytestream: # Skip the magic number and count; we know these values. bytestream.read(8) buf = bytestream.read(1 * num_images) labels = numpy.frombuffer(buf, dtype=numpy.uint8) # Convert to dense 1-hot representation. return (numpy.arange(NUM_LABELS) == labels[:, None]).astype(numpy.float32) train_labels = extract_labels(train_labels_filename, 60000) test_labels = extract_labels(test_labels_filename, 10000) """ Explanation: Looks good. Now we know how to index our full set of training and test images. Label data Let's move on to loading the full set of labels. As is typical in classification problems, we'll convert our input labels into a 1-hot encoding over a length 10 vector corresponding to 10 digits. The vector [0, 1, 0, 0, 0, 0, 0, 0, 0, 0], for example, would correspond to the digit 1. End of explanation """ print('Training labels shape', train_labels.shape) print('First label vector', train_labels[0]) print('Second label vector', train_labels[1]) """ Explanation: As with our image data, we'll double-check that our 1-hot encoding of the first few values matches our expectations. 
End of explanation """ VALIDATION_SIZE = 5000 validation_data = train_data[:VALIDATION_SIZE, :, :, :] validation_labels = train_labels[:VALIDATION_SIZE] train_data = train_data[VALIDATION_SIZE:, :, :, :] train_labels = train_labels[VALIDATION_SIZE:] train_size = train_labels.shape[0] print('Validation shape', validation_data.shape) print('Train size', train_size) """ Explanation: The 1-hot encoding looks reasonable. Segmenting data into training, test, and validation The final step in preparing our data is to split it into three sets: training, test, and validation. This isn't the format of the original data set, so we'll take a small slice of the training data and treat that as our validation set. End of explanation """ import tensorflow as tf # We'll bundle groups of examples during training for efficiency. # This defines the size of the batch. BATCH_SIZE = 60 # We have only one channel in our grayscale images. NUM_CHANNELS = 1 # The random seed that defines initialization. SEED = 42 # This is where training samples and labels are fed to the graph. # These placeholder nodes will be fed a batch of training data at each # training step, which we'll write once we define the graph structure. train_data_node = tf.placeholder( tf.float32, shape=(BATCH_SIZE, IMAGE_SIZE, IMAGE_SIZE, NUM_CHANNELS)) train_labels_node = tf.placeholder(tf.float32, shape=(BATCH_SIZE, NUM_LABELS)) # For the validation and test data, we'll just hold the entire dataset in # one constant node. validation_data_node = tf.constant(validation_data) test_data_node = tf.constant(test_data) # The variables below hold all the trainable weights. For each, the # parameter defines how the variables will be initialized. conv1_weights = tf.Variable( tf.truncated_normal([5, 5, NUM_CHANNELS, 32], # 5x5 filter, depth 32. stddev=0.1, seed=SEED)) conv1_biases = tf.Variable(tf.zeros([32])) conv2_weights = tf.Variable( tf.truncated_normal([5, 5, 32, 64], stddev=0.1, seed=SEED)) conv2_biases = tf.Variable(tf.constant(0.1, shape=[64])) fc1_weights = tf.Variable( # fully connected, depth 512. tf.truncated_normal([IMAGE_SIZE // 4 * IMAGE_SIZE // 4 * 64, 512], stddev=0.1, seed=SEED)) fc1_biases = tf.Variable(tf.constant(0.1, shape=[512])) fc2_weights = tf.Variable( tf.truncated_normal([512, NUM_LABELS], stddev=0.1, seed=SEED)) fc2_biases = tf.Variable(tf.constant(0.1, shape=[NUM_LABELS])) print('Done') """ Explanation: Defining the model Now that we've prepared our data, we're ready to define our model. The comments describe the architecture, which fairly typical of models that process image data. The raw input passes through several convolution and max pooling layers with rectified linear activations before several fully connected layers and a softmax loss for predicting the output class. During training, we use dropout. We'll separate our model definition into three steps: Defining the variables that will hold the trainable weights. Defining the basic model graph structure described above. And, Stamping out several copies of the model graph for training, testing, and validation. We'll start with the variables. End of explanation """ def model(data, train=False): """The Model definition.""" # 2D convolution, with 'SAME' padding (i.e. the output feature map has # the same size as the input). Note that {strides} is a 4D array whose # shape matches the data layout: [image index, y, x, depth]. conv = tf.nn.conv2d(data, conv1_weights, strides=[1, 1, 1, 1], padding='SAME') # Bias and rectified linear non-linearity. 
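  # (tf.nn.bias_add broadcasts the per-channel biases across every spatial position of the feature map.)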
relu = tf.nn.relu(tf.nn.bias_add(conv, conv1_biases)) # Max pooling. The kernel size spec ksize also follows the layout of # the data. Here we have a pooling window of 2, and a stride of 2. pool = tf.nn.max_pool(relu, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME') conv = tf.nn.conv2d(pool, conv2_weights, strides=[1, 1, 1, 1], padding='SAME') relu = tf.nn.relu(tf.nn.bias_add(conv, conv2_biases)) pool = tf.nn.max_pool(relu, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME') # Reshape the feature map cuboid into a 2D matrix to feed it to the # fully connected layers. pool_shape = pool.get_shape().as_list() reshape = tf.reshape( pool, [pool_shape[0], pool_shape[1] * pool_shape[2] * pool_shape[3]]) # Fully connected layer. Note that the '+' operation automatically # broadcasts the biases. hidden = tf.nn.relu(tf.matmul(reshape, fc1_weights) + fc1_biases) # Add a 50% dropout during training only. Dropout also scales # activations such that no rescaling is needed at evaluation time. if train: hidden = tf.nn.dropout(hidden, 0.5, seed=SEED) return tf.matmul(hidden, fc2_weights) + fc2_biases print('Done') """ Explanation: Now that we've defined the variables to be trained, we're ready to wire them together into a TensorFlow graph. We'll define a helper to do this, model, which will return copies of the graph suitable for training and testing. Note the train argument, which controls whether or not dropout is used in the hidden layer. (We want to use dropout only during training.) End of explanation """ # Training computation: logits + cross-entropy loss. logits = model(train_data_node, True) loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits( labels=train_labels_node, logits=logits)) # L2 regularization for the fully connected parameters. regularizers = (tf.nn.l2_loss(fc1_weights) + tf.nn.l2_loss(fc1_biases) + tf.nn.l2_loss(fc2_weights) + tf.nn.l2_loss(fc2_biases)) # Add the regularization term to the loss. loss += 5e-4 * regularizers # Optimizer: set up a variable that's incremented once per batch and # controls the learning rate decay. batch = tf.Variable(0) # Decay once per epoch, using an exponential schedule starting at 0.01. learning_rate = tf.train.exponential_decay( 0.01, # Base learning rate. batch * BATCH_SIZE, # Current index into the dataset. train_size, # Decay step. 0.95, # Decay rate. staircase=True) # Use simple momentum for the optimization. optimizer = tf.train.MomentumOptimizer(learning_rate, 0.9).minimize(loss, global_step=batch) # Predictions for the minibatch, validation set and test set. train_prediction = tf.nn.softmax(logits) # We'll compute them only once in a while by calling their {eval()} method. validation_prediction = tf.nn.softmax(model(validation_data_node)) test_prediction = tf.nn.softmax(model(test_data_node)) print('Done') """ Explanation: Having defined the basic structure of the graph, we're ready to stamp out multiple copies for training, testing, and validation. Here, we'll do some customizations depending on which graph we're constructing. train_prediction holds the training graph, for which we use cross-entropy loss and weight regularization. We'll adjust the learning rate during training -- that's handled by the exponential_decay operation, which is itself an argument to the MomentumOptimizer that performs the actual training. The validation and prediction graphs are much simpler to generate -- we need only create copies of the model with the validation and test inputs and a softmax classifier as the output. 
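(For reference, the staircase=True learning-rate schedule defined above works out to 0.01 * 0.95**((batch * BATCH_SIZE) // train_size) — the base rate is multiplied by 0.95 once per full pass through the training data.)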
End of explanation """ # Create a new interactive session that we'll use in # subsequent code cells. s = tf.InteractiveSession() # Use our newly created session as the default for # subsequent operations. s.as_default() # Initialize all the variables we defined above. tf.global_variables_initializer().run() """ Explanation: Training and visualizing results Now that we have the training, test, and validation graphs, we're ready to actually go through the training loop and periodically evaluate loss and error. All of these operations take place in the context of a session. In Python, we'd write something like: with tf.Session() as s: ...training / test / evaluation loop... But, here, we'll want to keep the session open so we can poke at values as we work out the details of training. The TensorFlow API includes a function for this, InteractiveSession. We'll start by creating a session and initializing the varibles we defined above. End of explanation """ BATCH_SIZE = 60 # Grab the first BATCH_SIZE examples and labels. batch_data = train_data[:BATCH_SIZE, :, :, :] batch_labels = train_labels[:BATCH_SIZE] # This dictionary maps the batch data (as a numpy array) to the # node in the graph it should be fed to. feed_dict = {train_data_node: batch_data, train_labels_node: batch_labels} # Run the graph and fetch some of the nodes. _, l, lr, predictions = s.run( [optimizer, loss, learning_rate, train_prediction], feed_dict=feed_dict) print('Done') """ Explanation: Now we're ready to perform operations on the graph. Let's start with one round of training. We're going to organize our training steps into batches for efficiency; i.e., training using a small set of examples at each step rather than a single example. End of explanation """ print(predictions[0]) """ Explanation: Let's take a look at the predictions. How did we do? Recall that the output will be probabilities over the possible classes, so let's look at those probabilities. End of explanation """ # The highest probability in the first entry. print('First prediction', numpy.argmax(predictions[0])) # But, predictions is actually a list of BATCH_SIZE probability vectors. print(predictions.shape) # So, we'll take the highest probability for each vector. print('All predictions', numpy.argmax(predictions, 1)) """ Explanation: As expected without training, the predictions are all noise. Let's write a scoring function that picks the class with the maximum probability and compares with the example's label. We'll start by converting the probability vectors returned by the softmax into predictions we can match against the labels. End of explanation """ print('Batch labels', numpy.argmax(batch_labels, 1)) """ Explanation: Next, we can do the same thing for our labels -- using argmax to convert our 1-hot encoding into a digit class. End of explanation """ correct = numpy.sum(numpy.argmax(predictions, 1) == numpy.argmax(batch_labels, 1)) total = predictions.shape[0] print(float(correct) / float(total)) confusions = numpy.zeros([10, 10], numpy.float32) bundled = zip(numpy.argmax(predictions, 1), numpy.argmax(batch_labels, 1)) for predicted, actual in bundled: confusions[predicted, actual] += 1 plt.grid(False) plt.xticks(numpy.arange(NUM_LABELS)) plt.yticks(numpy.arange(NUM_LABELS)) plt.imshow(confusions, cmap=plt.cm.jet, interpolation='nearest'); """ Explanation: Now we can compare the predicted and label classes to compute the error rate and confusion matrix for this batch. 
End of explanation """ def error_rate(predictions, labels): """Return the error rate and confusions.""" correct = numpy.sum(numpy.argmax(predictions, 1) == numpy.argmax(labels, 1)) total = predictions.shape[0] error = 100.0 - (100 * float(correct) / float(total)) confusions = numpy.zeros([10, 10], numpy.float32) bundled = zip(numpy.argmax(predictions, 1), numpy.argmax(labels, 1)) for predicted, actual in bundled: confusions[predicted, actual] += 1 return error, confusions print('Done') """ Explanation: Now let's wrap this up into our scoring function. End of explanation """ # Train over the first 1/4th of our training set. steps = train_size // BATCH_SIZE for step in range(steps): # Compute the offset of the current minibatch in the data. # Note that we could use better randomization across epochs. offset = (step * BATCH_SIZE) % (train_size - BATCH_SIZE) batch_data = train_data[offset:(offset + BATCH_SIZE), :, :, :] batch_labels = train_labels[offset:(offset + BATCH_SIZE)] # This dictionary maps the batch data (as a numpy array) to the # node in the graph it should be fed to. feed_dict = {train_data_node: batch_data, train_labels_node: batch_labels} # Run the graph and fetch some of the nodes. _, l, lr, predictions = s.run( [optimizer, loss, learning_rate, train_prediction], feed_dict=feed_dict) # Print out the loss periodically. if step % 100 == 0: error, _ = error_rate(predictions, batch_labels) print('Step %d of %d' % (step, steps)) print('Mini-batch loss: %.5f Error: %.5f Learning rate: %.5f' % (l, error, lr)) print('Validation error: %.1f%%' % error_rate( validation_prediction.eval(), validation_labels)[0]) """ Explanation: We'll need to train for some time to actually see useful predicted values. Let's define a loop that will go through our data. We'll print the loss and error periodically. Here, we want to iterate over the entire data set rather than just the first batch, so we'll need to slice the data to that end. (One pass through our training set will take some time on a CPU, so be patient if you are executing this notebook.) End of explanation """ test_error, confusions = error_rate(test_prediction.eval(), test_labels) print('Test error: %.1f%%' % test_error) plt.xlabel('Actual') plt.ylabel('Predicted') plt.grid(False) plt.xticks(numpy.arange(NUM_LABELS)) plt.yticks(numpy.arange(NUM_LABELS)) plt.imshow(confusions, cmap=plt.cm.jet, interpolation='nearest'); for i, cas in enumerate(confusions): for j, count in enumerate(cas): if count > 0: xoff = .07 * len(str(count)) plt.text(j-xoff, i+.2, int(count), fontsize=9, color='white') """ Explanation: The error seems to have gone down. Let's evaluate the results using the test set. To help identify rare mispredictions, we'll include the raw count of each (prediction, label) pair in the confusion matrix. End of explanation """ plt.xticks(numpy.arange(NUM_LABELS)) plt.hist(numpy.argmax(test_labels, 1)); """ Explanation: We can see here that we're mostly accurate, with some errors you might expect, e.g., '9' is often confused as '4'. Let's do another sanity check to make sure this matches roughly the distribution of our test set, e.g., it seems like we have fewer '5' values. End of explanation """
chinmaymk/machine-learning-experiments
03-adult-income-by-census.ipynb
mit
def read_data(path): return pd.read_csv(path, index_col=False, skipinitialspace=True, names=['age', 'workclass', 'fnlwgt', 'education', 'education_num', 'marital_status', 'occupation', 'relationship', 'race', 'sex', 'capital_gain', 'capital_loss', 'hours_per_week', 'native_country', 'income'] ) train = read_data('./data/adult/adult.data') test = read_data('./data/adult/adult.test') train = train.append(test) train.head() train.hist(figsize=(12, 9)) """ Explanation: Data exploration End of explanation """ # for column in train.select_dtypes(['object']).columns: # train[column] = train[column].astype('category') ## Check for duplicates, nulls train.drop_duplicates(inplace=True) train.dropna(inplace=True) print any(train.duplicated()) print train.isnull().any() """ Explanation: age, education_num, hours_per_week, fnlwgt seem like good candidates as features. Not much information in capital_gain, capital_loss. Some routine stuff Convert objects to categories Drop duplicates Drop NA's - we can potentially impute these values. But always try out the simpler alternative before making it too complicated :) End of explanation """ train.income.loc[train.income == '>50K.'] = '>50K' train.income.loc[train.income == '<=50K.'] = '<=50K' train.income.value_counts() """ Explanation: Let's clean some data End of explanation """ education_subset = train.groupby(['education_num', 'income']).size().reset_index() education_subset.columns = ['education_num', 'income', 'count'] func = lambda x: float(x['count']) / train[train.education_num == x.education_num].count()[0] education_subset['percentage'] = education_subset.apply(func, axis=1) education_subset['education + income'] = education_subset.apply(lambda x: '%s, %s' % (x.education_num, x.income), axis=1) education_subset.sort().plot(kind='barh', x='education + income', y='percentage', figsize=(12,12)) """ Explanation: Intuition 1: Higher education should result in more income. End of explanation """ train.groupby('income').hist(figsize=(15,12)) """ Explanation: Above plot shows percentage of population with respect to education and income, and it seems people with Masters and PhD tend to earn to more (more number of people are in >50K bucket). Intuition 2: People earn more as they get more experience. End of explanation """ from sklearn.preprocessing import LabelEncoder, OneHotEncoder lencoder = LabelEncoder() oencoder = OneHotEncoder() features = pd.DataFrame() features['age'] = train['age'] features['education_num'] = train['education_num'] features['hours_per_week'] = train['hours_per_week'] features['fnlwgt'] = train['fnlwgt'] features['sex'] = lencoder.fit_transform(train.sex) features['occupation'] = lencoder.fit_transform(train.occupation) features.income = train.income features.income = lencoder.fit_transform(features.income) features.head() """ Explanation: First plot shows distribution of age with respect to income <= 50K. Age is used as an proxy to experience. Assumption here is people continue to work as they age and acquire more skills in the process. As per intuition, number of people making less than 50K decreases as per age. Second plot shows income > 50K. More interestingly, data shows a peak around 45. This indicates either there aren't enough poeple of age 45+ earning more than 50K in the data or income decreases as people approach retirement. 
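A quick way to probe whether the drop-off after the peak around 45 reflects fewer older high earners or genuinely lower incomes is to compute the share of >50K earners per age bin. This is only an illustrative sketch (the 10-year bin edges are an arbitrary choice, not part of the original analysis):
```python
# Share of >50K earners in each 10-year age bin (illustrative check)
age_bins = pd.cut(train.age, bins=range(15, 95, 10))
share_high = train.groupby(age_bins).apply(lambda g: (g.income == '>50K').mean())
print(share_high)
```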
Feature construction End of explanation """ from sklearn.ensemble import RandomForestClassifier from sklearn.cross_validation import train_test_split x_train, x_test, y_train, y_test = train_test_split(features.drop('income'), features.income) model = RandomForestClassifier() model.fit(x_train, y_train) y_hat = model.predict(x_test) """ Explanation: Model fitting End of explanation """ from sklearn.metrics import confusion_matrix, accuracy_score accuracy_score(y_test, y_hat) confusion_matrix(y_test, y_hat) """ Explanation: Model/Feature Evaluation End of explanation """
Kaggle/learntools
notebooks/data_viz_to_coder/raw/tut3.ipynb
apache-2.0
#$HIDE$ import pandas as pd pd.plotting.register_matplotlib_converters() import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns print("Setup Complete") """ Explanation: Now that you can create your own line charts, it's time to learn about more chart types! By the way, if this is your first experience with writing code in Python, you should be very proud of all that you have accomplished so far, because it's never easy to learn a completely new skill! If you stick with the course, you'll notice that everything will only get easier (while the charts you'll build will get more impressive!), since the code is pretty similar for all of the charts. Like any skill, coding becomes natural over time, and with repetition. In this tutorial, you'll learn about bar charts and heatmaps. Set up the notebook As always, we begin by setting up the coding environment. (This code is hidden, but you can un-hide it by clicking on the "Code" button immediately below this text, on the right.) End of explanation """ # Path of the file to read flight_filepath = "../input/flight_delays.csv" # Read the file into a variable flight_data flight_data = pd.read_csv(flight_filepath, index_col="Month") """ Explanation: Select a dataset In this tutorial, we'll work with a dataset from the US Department of Transportation that tracks flight delays. Opening this CSV file in Excel shows a row for each month (where 1 = January, 2 = February, etc) and a column for each airline code. Each entry shows the average arrival delay (in minutes) for a different airline and month (all in year 2015). Negative entries denote flights that (on average) tended to arrive early. For instance, the average American Airlines flight (airline code: AA) in January arrived roughly 7 minutes late, and the average Alaska Airlines flight (airline code: AS) in April arrived roughly 3 minutes early. Load the data As before, we load the dataset using the pd.read_csv command. End of explanation """ # Print the data flight_data """ Explanation: You may notice that the code is slightly shorter than what we used in the previous tutorial. In this case, since the row labels (from the 'Month' column) don't correspond to dates, we don't add parse_dates=True in the parentheses. But, we keep the first two pieces of text as before, to provide both: - the filepath for the dataset (in this case, flight_filepath), and - the name of the column that will be used to index the rows (in this case, index_col="Month"). Examine the data Since the dataset is small, we can easily print all of its contents. This is done by writing a single line of code with just the name of the dataset. End of explanation """ # Set the width and height of the figure plt.figure(figsize=(10,6)) # Add title plt.title("Average Arrival Delay for Spirit Airlines Flights, by Month") # Bar chart showing average arrival delay for Spirit Airlines flights by month sns.barplot(x=flight_data.index, y=flight_data['NK']) # Add label for vertical axis plt.ylabel("Arrival delay (in minutes)") """ Explanation: Bar chart Say we'd like to create a bar chart showing the average arrival delay for Spirit Airlines (airline code: NK) flights, by month. 
End of explanation """ # Set the width and height of the figure plt.figure(figsize=(14,7)) # Add title plt.title("Average Arrival Delay for Each Airline, by Month") # Heatmap showing average arrival delay for each airline by month sns.heatmap(data=flight_data, annot=True) # Add label for horizontal axis plt.xlabel("Airline") """ Explanation: The commands for customizing the text (title and vertical axis label) and size of the figure are familiar from the previous tutorial. The code that creates the bar chart is new: ```python Bar chart showing average arrival delay for Spirit Airlines flights by month sns.barplot(x=flight_data.index, y=flight_data['NK']) `` It has three main components: -sns.barplot- This tells the notebook that we want to create a bar chart. - _Remember thatsnsrefers to the [seaborn](https://seaborn.pydata.org/) package, and all of the commands that you use to create charts in this course will start with this prefix._ -x=flight_data.index- This determines what to use on the horizontal axis. In this case, we have selected the column that **_index_**es the rows (in this case, the column containing the months). -y=flight_data['NK']- This sets the column in the data that will be used to determine the height of each bar. In this case, we select the'NK'` column. Important Note: You must select the indexing column with flight_data.index, and it is not possible to use flight_data['Month'] (which will return an error). This is because when we loaded the dataset, the "Month" column was used to index the rows. We always have to use this special notation to select the indexing column. Heatmap We have one more plot type to learn about: heatmaps! In the code cell below, we create a heatmap to quickly visualize patterns in flight_data. Each cell is color-coded according to its corresponding value. End of explanation """
kit-cel/wt
qc/basic_concepts_Python.ipynb
gpl-2.0
# defining lists sport_list = [ 'cycling', 'football', 'fitness' ] first_prime_numbers = [ 2, 3, 5, 7, 11, 13, 17, 19 ] # getting contents sport = sport_list[ 2 ] third_prime = first_prime_numbers[ 2 ] # printing print( 'All sports:', sport_list ) print( 'Sport to be done:', sport ) print( '\nFirst primes:', first_prime_numbers ) print( 'Third prime number:', third_prime ) # adapt entries and append new entries sport_list[ 1 ] = 'swimming' sport_list.append( 'running' ) first_prime_numbers.append( 23 ) # printing print( 'All sports:', sport_list ) print( 'First primes:', first_prime_numbers ) """ Explanation: Contents and Objective Describing several commands and methods that will be used throughout the simulations <b>Note:</b> Basic knowledge of programming languages and concepts is assumed. Only specific concepts that are different from, e.g., C++ or Matlab, are provided. <b>NOTE 2:</b> The following summary is by no means complete or exhaustive, but only provides a short and simplified overview of the commands used throughout the simulations in the lecture. For a detailed introduction please have a look at one of the numerous web-tutorials or books on Python, e.g., https://www.python-kurs.eu/ https://link.springer.com/book/10.1007%2F978-1-4842-4246-9 https://primo.bibliothek.kit.edu/primo_library/libweb/action/search.do?mode=Basic&vid=KIT&vl%28freeText0%29=python&vl%28freeText0%29=python&fn=search&tab=kit&srt=date Cell Types There are two types of cells: Text cells (called 'Markdown'): containing text, allowing use of LaTeX Math/code cells: where code is being executed As long as you are just reading the simulations, there is no need to be concerned about this fact. Data Structures In the following sections the basic data structures used in upcoming simulations will be introduced. Basic types as int, float, string are supposed to be well-known. Lists Container-type structure for collecting entities (which may even be of different type) Defined by key word list( ) or by square brackets with entities being separated by comma Referenced by index in square brackets; <b>Note</b>: indexing starting at 0 Entries may be changed, appended, sliced,... End of explanation """ # defining tuple sport_tuple = ( 'cycling', 'football', 'fitness' ) # getting contents sport = sport_tuple[ 2 ] # printing print( 'All sports:', sport_tuple ) print( 'Sport to be done:', sport ) # append new entries sport_tuple += ( 'running', ) # printing print( 'All sports:', sport_tuple ) print() # changing entries will fail # --> ERROR is being generated on purpose # --> NOTE: Error is handled by 'try: ... except: ...' 
statement try: sport_tuple[ 1 ] = 'swimming' except: print('ERROR: Entries within tuples cannot be adapted!') """ Explanation: Tuples Similar to lists but "immutable", i.e., entries can be appended, but not be changed Defined by tuple( ) or by brackets with entities being separated by comma Referenced by index in square brackets; <b>Note</b>: indexing starting at 0 End of explanation """ # defining dictionaries sports_days = { 'Monday': 'pause', 'Tuesday': 'fitness', 'Wednesday' : 'running', 'Thursday' : 'fitness', 'Friday' : 'swimming', 'Saturday' : 'cycling', 'Sunday' : 'cycling' } print( 'Sport by day:', sports_days ) print( '\nOn Tuesday:', sports_days[ 'Tuesday' ]) # Changes are made by using the key as identifier sports_days[ 'Tuesday' ] = 'running' print( 'Sport by day:', sports_days ) """ Explanation: Dictionaries Container in which entries are of type: ( key : value ) Defined by key word dict or by curly brackets with entities of shape "key : value" being separated by comma Referenced by key in square brackets --> <b>Note</b>: Indexing by keys instead of indices might be a major advantage (at least sometimes) End of explanation """ # defining sets sports_set = { 'fitness', 'running', 'swimming', 'cycling'} print( sports_set ) print() # indexing will fail # --> ERROR is being generated on purpose try: print( sports_set[0] ) except: print('ERROR: No indexing of sets!') # adding elements (or not) sports_set.add( 'pause' ) print(sports_set) sports_set.add( 'fitness' ) print(sports_set) # union of sets (also: intersection, complement, ...) all_stuff_set = set( sports_set ) union_of_sets = all_stuff_set.union( first_prime_numbers) print( union_of_sets ) """ Explanation: Sets As characterized by the naming, sets are representing mathematical sets; no double occurences of elements Defined by keyword set of by curly braces with entities being separated by comma <b>Note</b>: As in maths, sets don't possess ordering, so there is no indexing of sets! End of explanation """ # looping in lists simply parsing along the list for s in sport_list: print( s ) print() # looping in dictionaries happens along keys for s in sports_days: print( '{}: \t{}'.format( s, sports_days[ s ] ) ) """ Explanation: Flow Control Standards commands as for, while, ... Functions for specific purposes <b>Note:</b> Since commands and their concept are quite self-explaining, only short description of syntax is provided For Loops for loops in Python allow looping along every so-called iterable as, e.g., list, tuple, dicts.... 
<b>Note</b>: Not necessarily int Syntax: for i in iterable: <b>Note:</b> Blocks are structured by indentation; sub-command (as, e.g., in a loop) are indented End of explanation """ # initialize variables sum_primes = 0 _n = 0 # sum primes up to sum-value of 20 while sum_primes < 20: # add prime of according index sum_primes += first_prime_numbers[ _n ] # increase index _n += 1 print( 'Sum of first {} primes is {}.'.format( _n, sum_primes ) ) """ Explanation: While Loops while loops in Python are (as usual) constructed by checking condition and exiting loop if condition becomes False <b>Note:</b> Blocks are structured by indentation; sub-command (as, e.g., in a loop) are indented End of explanation """ def get_n_th_prime( n, first_prime_numbers ): ''' DOC String IN: index of prime number, list of prime numbers OUT: n-th prime number ''' # do something smart as, e.g., checking that according index really exists # "assert" does the job by checking first arg and--if not being TRUE--providing text given as second arg try: val = first_prime_numbers[ n - 1 ] except: return '"ERROR: Index not feasible!"' # NOTE: since counting starts at 0, (n-1)st number is returned # Furthermore, there is no need for a function here; a simple reference would have done the job! return first_prime_numbers[ n - 1 ] # show doc string print( help( get_n_th_prime ) ) # apply functions N = 3 print( '{}. prime number is {}.'.format( N, get_n_th_prime( N, first_prime_numbers ) ) ) print() N = 30 print( '{}. prime number is {}.'.format( N, get_n_th_prime( N, first_prime_numbers ) ) ) """ Explanation: Functions Defined by key-word def followed by list of arguments in brackets Doc string defined directly after def by ''' TEXT ''' Values returned by key word return; <b>Note:</b> return "value" can be scalar, list, dict, vector, maxtrix,... End of explanation """
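"""
Explanation (addendum): The short sketch below combines the pieces introduced above -- a list, a function that returns a tuple, and a dict for the results. All names used here (prime_list, split_by_threshold, result) are illustrative and not part of the original notebook; the list is re-defined so the cell runs on its own.
End of explanation
"""
# list of primes, as used throughout this notebook
prime_list = [ 2, 3, 5, 7, 11, 13, 17, 19, 23 ]

def split_by_threshold( values, threshold ):
    '''
    Split a list of numbers into two lists
    IN: list of numbers, scalar threshold
    OUT: tuple ( below, at_or_above )
    '''
    below = [ v for v in values if v < threshold ]
    at_or_above = [ v for v in values if v >= threshold ]
    # returning several values packs them into a tuple
    return below, at_or_above

# unpack the returned tuple into two names
small, large = split_by_threshold( prime_list, 10 )

# collect the results in a dict, referenced by descriptive keys
result = { 'small primes': small, 'large primes': large }
for key in result:
    print( '{}: \t{}'.format( key, result[ key ] ) )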
weissmanlab/magic
magicplots.ipynb
mit
import numpy as np %matplotlib inline import matplotlib import matplotlib.pyplot as plt import magic """ Explanation: Setup Import packages and modules End of explanation """ stats = ['your_prefix1', 'your_prefix2', 'your_prefix3'] # Example: ['chr1_pair', 'chr1_tbl', 'chr2_pair', 'itip'] """ Explanation: Change these names to the appropriate values: List all the prefixes of the different output files you want to plot: End of explanation """ pairstats = ['your_prefix1', 'your_prefix2'] # Example: ['chr1_pair', 'chr2_pair'] # Every element of pairstats should also be in stats # (We separate out the pairwise ones because they're the only ones where # talking about "N_e(t)" even begins to make sense.) """ Explanation: List just the prefixes that correspond to pairwise coalescence time distributions: End of explanation """ # These are set to default values that convert human data to years. mu = 1.25e-8 # mutation rate gen = 30 # generation time """ Explanation: Unscale times and "N_e" MAGIC scales all times by the per-base mutation rate. To change to clock time, you need to know the mutation rate per generation and the generation time. Enter them here: End of explanation """ tfactor = mu/gen # For the pairwise distributions, there is an additional factor of 2 to convert from total branch length to TMRCA: tmrcafactor = 2*tfactor nfactor = 4*mu """ Explanation: These are the corresponding conversion factors: End of explanation """ MAGIC = {stat: {} for stat in stats} for stat in stats: with open(stat + '_final.txt', 'r') as infile: # If you generated your *_final.txt files using magic.py's (default) '--family pieceexp': MAGIC[stat]['T'] = magic.PiecewiseExponential(*np.transpose([[float(x) for x in line.split()] for line in infile])) # If you used the '--family gammamix' option: # MAGIC[stat]['T'] = magic.GammaMix([float(x) for line in infile for x in line.split()]) with open(stat + '_LT.txt'.format(stat), 'r') as infile: MAGIC[stat]['LT'] = np.array([[float(x) for x in line.split()] for line in infile]) """ Explanation: Import data End of explanation """ plt.style.use('seaborn-talk') matplotlib.rcParams['axes.labelsize'] = 20 matplotlib.rcParams['axes.titlesize'] = 24 matplotlib.rcParams['xtick.labelsize'] = 20 matplotlib.rcParams['xtick.major.size'] = 10 matplotlib.rcParams['xtick.minor.size'] = 5 matplotlib.rcParams['ytick.labelsize'] = 20 matplotlib.rcParams['ytick.major.size'] = 10 tableau20 = np.array([(31, 119, 180), (174, 199, 232), (255, 127, 14), (255, 187, 120), (44, 160, 44), (152, 223, 138), (214, 39, 40), (255, 152, 150), (148, 103, 189), (197, 176, 213), (140, 86, 75), (196, 156, 148), (227, 119, 194), (247, 182, 210), (127, 127, 127), (199, 199, 199), (188, 189, 34), (219, 219, 141), (23, 190, 207), (158, 218, 229)])/255 statcolors = {stat: tableau20[i*20//len(stats)] for i, stat in enumerate(stats)} """ Explanation: Plots Style settings End of explanation """ tvals = np.logspace(-6, -1.5, 200) for stat in stats: plt.plot(tvals, MAGIC[stat]['T'].cdf(tvals), c=statcolors[stat], label=stat) plt.xscale('log') plt.ylim(0,1) plt.xlim(1e-6,1.2e-2) plt.legend(bbox_to_anchor=(.2, .99)) plt.xlabel('Scaled time, $\mu t$') plt.ylabel('Probability tree length is less than t, $P(T<t)$') tlim = plt.gca().get_xlim() plt.twiny() plt.xscale('log') plt.xlim(t/tfactor for t in tlim) plt.xlabel('Time (years)') """ Explanation: Cumulative distributions End of explanation """ # Plot the original inferred Laplace transform values as points, and show fitted curves. 
# The top x-axis gives the time scale that each point roughly corresponds to. # Note that small s ~ long time, and vice versa. srange = np.logspace(0, 5) for stat in stats: plt.errorbar(*zip(*MAGIC[stat]['LT']), fmt='o', c=statcolors[stat], label=stat) plt.plot(srange, MAGIC[stat]['T'].lt(srange), c=statcolors[stat]) plt.xscale('log') plt.legend(bbox_to_anchor=(.2, .4)) plt.xlabel('Laplace transform variable, $s$') plt.ylabel('Laplace transform') slim = plt.gca().get_xlim() plt.twiny() plt.xscale('log') plt.xlim(1/(s*tfactor) for s in slim) plt.xlabel(r'Time scale (years)') """ Explanation: Laplace transforms End of explanation """ pairrange = np.logspace(-5.5, -2, 200) for stat in pairstats: plt.plot(pairrange, MAGIC[stat]['T'].ne(pairrange), c=statcolors[stat], label=stat) plt.xscale('log') plt.xlabel(r'Scaled time, $\mu T$') plt.ylabel(r'Scaled inverse coalescence rate, $2\mu/c(t)$') plt.tick_params(which='both', top='off', right='off') plt.legend(bbox_to_anchor=(.2, .3)) tlim = plt.gca().get_xlim() nlim = plt.gca().get_ylim() plt.twinx() plt.ylim(n/nfactor for n in nlim) plt.ylabel(r'"Effective population size", $N_e(t)$') plt.twiny() plt.xscale('log') plt.xlim(t/tmrcafactor for t in tlim) plt.xlabel('Time (years ago)') """ Explanation: "Effective population size" End of explanation """
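"""
Explanation (addendum): The helpers below simply collect, in one place, the unit conversions used in the plots above (scaled time to years, scaled inverse coalescence rate to an "effective population size"), assuming the same default human parameters. The function names are illustrative and are not part of the magic module.
End of explanation
"""
mu = 1.25e-8   # mutation rate per base per generation
gen = 30       # generation time in years

def scaled_time_to_years(mu_t, pairwise=False):
    # MAGIC reports times scaled by the mutation rate; divide by mu/gen to get years.
    # Pairwise (TMRCA) statistics carry an extra factor of 2 (total branch length -> TMRCA).
    factor = mu / gen
    if pairwise:
        factor *= 2
    return mu_t / factor

def scaled_rate_to_ne(scaled_inverse_rate):
    # Convert the scaled inverse coalescence rate 2*mu/c(t) into N_e(t) by dividing by 4*mu.
    return scaled_inverse_rate / (4 * mu)

print(scaled_time_to_years(1e-4))                  # a scaled time of 1e-4 in years
print(scaled_time_to_years(1e-4, pairwise=True))   # same scaled time for a pairwise statistic
print(scaled_rate_to_ne(1e-3))                     # a scaled inverse rate of 1e-3 as N_e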
jsharpna/DavisSML
2018_material/labs/lab1-soln.ipynb
mit
# Import the necessary packages import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.model_selection import LeaveOneOut from sklearn import linear_model, neighbors %matplotlib inline plt.style.use('ggplot') # Where to save the figures PROJECT_ROOT_DIR = ".." datapath = PROJECT_ROOT_DIR + "/data/lifesat/" plt.rcParams["figure.figsize"] = (8,6) """ Explanation: Lab 1: Nearest Neighbor Regression and Overfitting This is based on the notebook file 01 in Aurélien Geron's github page End of explanation """ # Download CSV from http://stats.oecd.org/index.aspx?DataSetCode=BLI oecd_bli = pd.read_csv(datapath+"oecd_bli_2015.csv", thousands=',') oecd_bli = oecd_bli[oecd_bli["INEQUALITY"]=="TOT"] oecd_bli = oecd_bli.pivot(index="Country", columns="Indicator", values="Value") oecd_bli.columns oecd_bli["Life satisfaction"].head() # Load and prepare GDP per capita data # Download data from http://goo.gl/j1MSKe (=> imf.org) gdp_per_capita = pd.read_csv(datapath+"gdp_per_capita.csv", thousands=',', delimiter='\t', encoding='latin1', na_values="n/a") gdp_per_capita.rename(columns={"2015": "GDP per capita"}, inplace=True) gdp_per_capita.set_index("Country", inplace=True) full_country_stats = pd.merge(left=oecd_bli, right=gdp_per_capita, left_index=True, right_index=True) full_country_stats.sort_values(by="GDP per capita", inplace=True) _ = full_country_stats.plot("GDP per capita",'Life satisfaction',kind='scatter') """ Explanation: Load and prepare data End of explanation """ xvars = ['Self-reported health','Water quality','Quality of support network','GDP per capita'] X = np.array(full_country_stats[xvars]) y = np.array(full_country_stats['Life satisfaction']) """ Explanation: Here's the full dataset, and there are other columns. I will subselect a few of them by hand. End of explanation """ def loo_risk(X,y,regmod): """ Construct the leave-one-out square error risk for a regression model Input: design matrix, X, response vector, y, a regression model, regmod Output: scalar LOO risk """ loo = LeaveOneOut() loo_losses = [] for train_index, test_index in loo.split(X): X_train, X_test = X[train_index], X[test_index] y_train, y_test = y[train_index], y[test_index] regmod.fit(X_train,y_train) y_hat = regmod.predict(X_test) loss = np.sum((y_hat - y_test)**2) loo_losses.append(loss) return np.mean(loo_losses) def emp_risk(X,y,regmod): """ Return the empirical risk for square error loss Input: design matrix, X, response vector, y, a regression model, regmod Output: scalar empirical risk """ regmod.fit(X,y) y_hat = regmod.predict(X) return np.mean((y_hat - y)**2) lin1 = linear_model.LinearRegression(fit_intercept=False) print('LOO Risk: '+ str(loo_risk(X,y,lin1))) print('Emp Risk: ' + str(emp_risk(X,y,lin1))) """ Explanation: I will define the following functions to expedite the LOO risk and the Empirical risk. End of explanation """ # knn = neighbors.KNeighborsRegressor(n_neighbors=5) """ Explanation: As you can see, the empirical risk is much less than the leave-one-out risk! This can happen in more dimensions. 
Nearest neighbor regression Use the method described here: http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsRegressor.html I have already imported the necessary module, so you just need to use the regression object (like we used LinearRegression) End of explanation """ LOOs = [] MSEs = [] K=30 Ks = range(1,K+1) for k in Ks: knn = neighbors.KNeighborsRegressor(n_neighbors=k) LOOs.append(loo_risk(X,y,knn)) MSEs.append(emp_risk(X,y,knn)) plt.plot(Ks,LOOs,'r',label="LOO risk") plt.title("Risks for kNN Regression") plt.plot(Ks,MSEs,'b',label="Emp risk") plt.legend() _ = plt.xlabel('k') """ Explanation: Exercise 1 For each k from 1 to 30 compute the nearest neighbors empirical risk and LOO risk. Plot these as a function of k and reflect on the bias-variance tradeoff here. (Hint: use the previously defined functions) End of explanation """ X1 = np.array(full_country_stats[['Self-reported health']]) LOOs = [] MSEs = [] K=30 Ks = range(1,K+1) for k in Ks: knn = neighbors.KNeighborsRegressor(n_neighbors=k) LOOs.append(loo_risk(X1,y,knn)) MSEs.append(emp_risk(X1,y,knn)) plt.plot(Ks,LOOs,'r',label="LOO risk") plt.title("Risks for kNN Regression") plt.plot(Ks,MSEs,'b',label="Emp risk") plt.legend() _ = plt.xlabel('k') """ Explanation: I decided to see what the performance is for k from 1 to 30. We see that the bias does not dominate until k exceeds 17, the performance is somewhat better for k around 12. This demonstrates that you can't trust the Empirical risk, since it includes the training sample. We can compare this LOO risk to that of linear regression (0.348) and see that it outperforms linear regression. Exercise 2 Do the same but for the reduced predictor variables below... End of explanation """
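"""
Explanation (addendum): A minimal sketch of how one might select k automatically by minimizing the leave-one-out risk. It uses synthetic data so the cell is self-contained; the helper mirrors the loo_risk function defined earlier, and the variable names (X_demo, y_demo, best_k) are illustrative.
End of explanation
"""
import numpy as np
from sklearn import neighbors
from sklearn.model_selection import LeaveOneOut

rng = np.random.RandomState(0)
X_demo = rng.uniform(0, 10, size=(60, 1))
y_demo = np.sin(X_demo[:, 0]) + 0.3 * rng.randn(60)

def loo_risk_simple(X, y, regmod):
    """Leave-one-out square error risk for a regression model."""
    losses = []
    for train_index, test_index in LeaveOneOut().split(X):
        regmod.fit(X[train_index], y[train_index])
        y_hat = regmod.predict(X[test_index])
        losses.append(np.sum((y_hat - y[test_index]) ** 2))
    return np.mean(losses)

# compute the LOO risk for each candidate k and keep the minimizer
risks = {k: loo_risk_simple(X_demo, y_demo, neighbors.KNeighborsRegressor(n_neighbors=k))
         for k in range(1, 21)}
best_k = min(risks, key=risks.get)
print('best k by LOO risk:', best_k, 'with risk', risks[best_k])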
zklgame/CatEyeNets
test/BatchNormalization.ipynb
mit
import os os.chdir(os.getcwd() + '/..') # Run some setup code for this notebook import random import numpy as np import matplotlib.pyplot as plt from utils.data_utils import get_CIFAR10_data from utils.metrics_utils import rel_error %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # Some more magic so that the notebook will reload external python modules; # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 data = get_CIFAR10_data('datasets/cifar-10-batches-py', subtract_mean=True) for k, v in data.items(): print('%s: ' % k, v.shape) """ Explanation: Batch Normalization One way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization which was recently proposed by [3]. The idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated. The authors of [3] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [3] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features. It is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension. [3] Sergey Ioffe and Christian Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", ICML 2015. 
End of explanation """ from layers.layers import batchnorm_forward # Check the training-time forward pass by checking means ans variances # of features both before and after batch normalization # Simulate the forward pass for a two-layer network np.random.seed(231) N, D1, D2, D3 = 200, 50, 60, 3 X = np.random.randn(N, D1) W1 = np.random.randn(D1, D2) W2 = np.random.randn(D2, D3) a = np.maximum(0, X.dot(W1)).dot(W2) print('Before batch normalization:') print(' means: ', a.mean(axis=0)) print(' stds: ', a.std(axis=0)) print # Means should be close to zero and stds close to one print('After batch normalization (gamma=1, beta=0)') a_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3), {'mode': 'train'}) print(' mean: ', a_norm.mean(axis=0)) print(' std: ', a_norm.std(axis=0)) print # Now means should be close to beta and stds close to gamma gamma = np.asarray([1.0, 2.0, 3.0]) beta = np.asarray([11.0, 12.0, 13.0]) a_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'}) print('After batch normalization (nontrivial gamma, beta)') print(' means: ', a_norm.mean(axis=0)) print(' stds: ', a_norm.std(axis=0)) from layers.layers import batchnorm_backward # Check the test-time forward pass by running the training-time # forward pass many times to warm up the running averages, and then # checking the means and variances of activations after a test-time # forward pass. np.random.seed(231) N, D1, D2, D3 = 200, 50, 60, 3 W1 = np.random.randn(D1, D2) W2 = np.random.randn(D2, D3) bn_param = {'mode': 'train'} gamma = np.ones(D3) beta = np.zeros(D3) for t in range(50): X = np.random.randn(N, D1) a = np.maximum(0, X.dot(W1)).dot(W2) batchnorm_forward(a, gamma, beta, bn_param) bn_param['mode'] = 'test' X = np.random.randn(N, D1) a = np.maximum(0, X.dot(W1)).dot(W2) a_norm, _ = batchnorm_forward(a, gamma, beta, bn_param) # Means should be close to zero and stds close to one, but will be # noisier than training-time forward passes. print('After batch normalization (test-time):') print(' means: ', a_norm.mean(axis=0)) print(' stds: ', a_norm.std(axis=0)) """ Explanation: Batch normalization: Forward In the file layers/layers.py, implement the batch normalization forward pass in the function batchnorm_forward. Once you have done so, run the following to test your implementation. End of explanation """ from layers.layers import batchnorm_backward from utils.gradient_check import eval_numerical_gradient_array # Gradient check batchnorm backward pass np.random.seed(231) N, D = 4, 5 x = 5 * np.random.randn(N, D) + 12 gamma = np.random.randn(D) beta = np.random.randn(D) dout = np.random.randn(N, D) bn_param = {'mode': 'train'} # x、gamma、beta will change everywhere once change in any where # so lambda can receive no param!!! 
fx = lambda _: batchnorm_forward(x, gamma, beta, bn_param)[0] fg = lambda _: batchnorm_forward(x, gamma, beta, bn_param)[0] fb = lambda _: batchnorm_forward(x, gamma, beta, bn_param)[0] dx_num = eval_numerical_gradient_array(fx, x, dout) da_num = eval_numerical_gradient_array(fg, gamma, dout) db_num = eval_numerical_gradient_array(fb, beta, dout) _, cache = batchnorm_forward(x, gamma, beta, bn_param) dx, dgamma, dbeta = batchnorm_backward(dout, cache) print('dx error: ', rel_error(dx_num, dx)) print('dgamma error: ', rel_error(da_num, dgamma)) print('dbeta error: ', rel_error(db_num, dbeta)) a = 1 def fs(ss): print(ss) f = lambda a: fs(a) def ff(f, c): f(c) b = 2 ff(f, b) """ Explanation: Batch Normalization: backward Now implement the backward pass for batch normalization in the function batchnorm_backward. To derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass. Once you have finished, run the following to numerically check your backward pass. End of explanation """ # TODO # np.random.seed(231) # N, D = 100, 500 # x = 5 * np.random.randn(N, D) + 12 # gamma = np.random.randn(D) # beta = np.random.randn(D) # dout = np.random.randn(N, D) # bn_param = {'mode': 'train'} # out, cache = batchnorm_forward(x, gamma, beta, bn_param) # t1 = time.time() # dx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache) # t2 = time.time() # dx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache) # t3 = time.time() # print('dx difference: ', rel_error(dx1, dx2)) # print('dgamma difference: ', rel_error(dgamma1, dgamma2)) # print('dbeta difference: ', rel_error(dbeta1, dbeta2)) # print('speedup: %.2fx' % ((t2 - t1) / (t3 - t2))) """ Explanation: Batch Normalization: alternative backward (OPTIONAL, +3 points extra credit) In class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For the sigmoid function, it turns out that you can derive a very simple formula for the backward pass by simplifying gradients on paper. Surprisingly, it turns out that you can also derive a simple expression for the batch normalization backward pass if you work out derivatives on paper and simplify. After doing so, implement the simplified batch normalization backward pass in the function batchnorm_backward_alt and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster. NOTE: This part of the assignment is entirely optional, but we will reward 3 points of extra credit if you can complete it. 
End of explanation """ from classifiers.fc_net import FullyConnectedNet from utils.gradient_check import eval_numerical_gradient np.random.seed(231) N, D, H1, H2, C = 2, 15, 20, 30, 10 X = np.random.randn(N, D) y = np.random.randint(C, size=(N,)) for reg in [0, 3.14]: print('Running check with reg = ', reg) model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C, reg=reg, weight_scale=5e-2, dtype=np.float64, use_batchnorm=True) loss, grads = model.loss(X, y) print('Initial loss: ', loss) for name in sorted(grads): f = lambda _: model.loss(X, y)[0] grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5) print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))) if reg == 0: print() """ Explanation: Fully Connected Nets with Batch Normalization Now that you have a working implementation for batch normalization, go back to your FullyConnectedNet in the file classifiers/fc_net.py. Modify your implementation to add batch normalization. Concretely, when the flag use_batchnorm is True in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation. HINT: You might find it useful to define an additional helper layer similar to those in the file layer_utils.py. If you decide to do so, do it in the file classifiers/fc_net.py. End of explanation """ from base.solver import Solver np.random.seed(231) # Try training a very deep net with batchnorm hidden_dims = [100, 100, 100, 100, 100] num_train = 1000 small_data = { 'X_train': data['X_train'][:num_train], 'y_train': data['y_train'][:num_train], 'X_val': data['X_val'], 'y_val': data['y_val'], } weight_scale = 2e-2 bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True) model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False) bn_solver = Solver(bn_model, small_data, num_epochs=10, batch_size=50, update_rule='adam', optim_config={ 'learning_rate': 1e-3, }, verbose=True, print_every=200) bn_solver.train() solver = Solver(model, small_data, num_epochs=10, batch_size=50, update_rule='adam', optim_config={ 'learning_rate': 1e-3, }, verbose=True, print_every=200) solver.train() """ Explanation: Batchnorm for deep networks Run the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization. End of explanation """ plt.subplot(3, 1, 1) plt.title('Training loss') plt.xlabel('Iteration') plt.subplot(3, 1, 2) plt.title('Training accuracy') plt.xlabel('Epoch') plt.subplot(3, 1, 3) plt.title('Validation accuracy') plt.xlabel('Epoch') plt.subplot(3, 1, 1) plt.plot(solver.loss_history, 'o', label='baseline') plt.plot(bn_solver.loss_history, 'o', label='batchnorm') plt.subplot(3, 1, 2) plt.plot(solver.train_acc_history, '-o', label='baseline') plt.plot(bn_solver.train_acc_history, '-o', label='batchnorm') plt.subplot(3, 1, 3) plt.plot(solver.val_acc_history, '-o', label='baseline') plt.plot(bn_solver.val_acc_history, '-o', label='batchnorm') for i in [1, 2, 3]: plt.subplot(3, 1, i) plt.legend(loc='upper center', ncol=4) plt.gcf().set_size_inches(15, 15) plt.show() """ Explanation: Run the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster. 
End of explanation """ np.random.seed(231) # Try training a very deep net with batchnorm hidden_dims = [50, 50, 50, 50, 50, 50, 50] num_train = 1000 small_data = { 'X_train': data['X_train'][:num_train], 'y_train': data['y_train'][:num_train], 'X_val': data['X_val'], 'y_val': data['y_val'], } bn_solvers = {} solvers = {} weight_scales = np.logspace(-4, 0, num=20) for i, weight_scale in enumerate(weight_scales): print('Running weight scale %d / %d' % (i + 1, len(weight_scales))) bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True) model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False) bn_solver = Solver(bn_model, small_data, num_epochs=10, batch_size=50, update_rule='adam', optim_config={ 'learning_rate': 1e-3, }, verbose=False, print_every=200) bn_solver.train() bn_solvers[weight_scale] = bn_solver solver = Solver(model, small_data, num_epochs=10, batch_size=50, update_rule='adam', optim_config={ 'learning_rate': 1e-3, }, verbose=False, print_every=200) solver.train() solvers[weight_scale] = solver # Plot results of weight scale experiment best_train_accs, bn_best_train_accs = [], [] best_val_accs, bn_best_val_accs = [], [] final_train_loss, bn_final_train_loss = [], [] for ws in weight_scales: best_train_accs.append(max(solvers[ws].train_acc_history)) bn_best_train_accs.append(max(bn_solvers[ws].train_acc_history)) best_val_accs.append(max(solvers[ws].val_acc_history)) bn_best_val_accs.append(max(bn_solvers[ws].val_acc_history)) final_train_loss.append(np.mean(solvers[ws].loss_history[-100:])) bn_final_train_loss.append(np.mean(bn_solvers[ws].loss_history[-100:])) plt.subplot(3, 1, 1) plt.title('Best val accuracy vs weight initialization scale') plt.xlabel('Weight initialization scale') plt.ylabel('Best val accuracy') plt.semilogx(weight_scales, best_val_accs, '-o', label='baseline') plt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm') plt.legend(ncol=2, loc='lower right') plt.subplot(3, 1, 2) plt.title('Best train accuracy vs weight initialization scale') plt.xlabel('Weight initialization scale') plt.ylabel('Best training accuracy') plt.semilogx(weight_scales, best_train_accs, '-o', label='baseline') plt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm') plt.legend() plt.subplot(3, 1, 3) plt.title('Final training loss vs weight initialization scale') plt.xlabel('Weight initialization scale') plt.ylabel('Final training loss') plt.semilogx(weight_scales, final_train_loss, '-o', label='baseline') plt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm') plt.legend() plt.gca().set_ylim(1.0, 3.5) plt.gcf().set_size_inches(10, 15) plt.show() """ Explanation: Batch normalization and initialization We will now run a small experiment to study the interaction of batch normalization and weight initialization. The first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second layer will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale. End of explanation """
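"""
Explanation (addendum): For reference, a self-contained NumPy sketch of the training-time batch normalization forward pass described above (normalize each feature over the minibatch, then scale and shift by gamma and beta). This only illustrates the idea; it is not the implementation expected in layers/layers.py, which also has to track running statistics for test time.
End of explanation
"""
import numpy as np

def batchnorm_forward_sketch(x, gamma, beta, eps=1e-5):
    # x has shape (N, D); statistics are computed per feature (per column)
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)   # zero mean, unit variance per feature
    out = gamma * x_hat + beta              # learnable scale and shift
    return out

np.random.seed(0)
x = 5 * np.random.randn(100, 3) + 12
out = batchnorm_forward_sketch(x,
                               gamma=np.array([1.0, 2.0, 3.0]),
                               beta=np.array([11.0, 12.0, 13.0]))
print('means: ', out.mean(axis=0))   # close to beta
print('stds:  ', out.std(axis=0))    # close to gamma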
certik/climate
RSS.ipynb
mit
#!wget http://www.remss.com/data/msu/data/netcdf/uat4_tb_v03r03_avrg_chTLT_197812_201308.nc3.nc #!mv uat4_tb_v03r03_avrg_chTLT_197812_201308.nc3.nc data/ #!wget http://www.remss.com/data/msu/data/netcdf/uat4_tb_v03r03_anom_chTLT_197812_201308.nc3.nc #!mv uat4_tb_v03r03_anom_chTLT_197812_201308.nc3.nc data/ """ Explanation: Remote Sensing Systems (RSS, http://www.ssmi.com/) provide machine readable curated datasets of satellite measurements, and the website also explains how they were obtained, processed etc. The temperature data is called MSU (Microwave Sounding Units), that operated between 1978-2005, and AMSU (Advanced Microwave Sounding Units) from 1998. They provide 4 main datasets: TLT (Temperature Lower Troposphere): MSU channel 2 by subtracting measurements made at different angles from each other TMT (Temperature Middle Troposphere): MSU channel 2 TTS (Temperature Troposphere Stratosphere): MSU channel 3 TLS (Temperature Lower Stratosphere): MSU channel 4 The AMSU also provides channels 10-14 (datasets available from RSS), which measure temperatures higher in the stratosphere than the highest MSU channel (4). End of explanation """ %pylab inline import urllib2 import os from IPython.display import Image def download(url, dir): """Saves file 'url' into 'dir', unless it already exists.""" filename = os.path.basename(url) fullpath = os.path.join(dir, filename) if os.path.exists(fullpath): print "Already downloaded:", filename else: print "Downloading:", filename open(fullpath, "w").write(urllib2.urlopen(url).read()) download("http://www.remss.com/data/msu/weighting_functions/std_atmosphere_wt_function_chan_TTS.txt", "data") download("http://www.remss.com/data/msu/weighting_functions/std_atmosphere_wt_function_chan_TLS.txt", "data") download("http://www.remss.com/data/msu/weighting_functions/std_atmosphere_wt_function_chan_tlt_land.txt", "data") download("http://www.remss.com/data/msu/weighting_functions/std_atmosphere_wt_function_chan_tlt_ocean.txt", "data") download("http://www.remss.com/data/msu/weighting_functions/std_atmosphere_wt_function_chan_tmt_land.txt", "data") download("http://www.remss.com/data/msu/weighting_functions/std_atmosphere_wt_function_chan_tmt_ocean.txt", "data") D = loadtxt("data/std_atmosphere_wt_function_chan_TTS.txt", skiprows=6) h = D[:, 1] wTTS = D[:, 5] D = loadtxt("data/std_atmosphere_wt_function_chan_TLS.txt", skiprows=6) assert max(abs(h-D[:, 1])) < 1e-12 wTLS = D[:, 5] D = loadtxt("data/std_atmosphere_wt_function_chan_tlt_land.txt", skiprows=7) assert max(abs(h-D[:, 1])) < 1e-12 wTLT_land = D[:, 5] D = loadtxt("data/std_atmosphere_wt_function_chan_tlt_ocean.txt", skiprows=7) assert max(abs(h-D[:, 1])) < 1e-12 wTLT_ocean = D[:, 5] D = loadtxt("data/std_atmosphere_wt_function_chan_tmt_land.txt", skiprows=7) assert max(abs(h-D[:, 1])) < 1e-12 wTMT_land = D[:, 5] D = loadtxt("data/std_atmosphere_wt_function_chan_tmt_ocean.txt", skiprows=7) assert max(abs(h-D[:, 1])) < 1e-12 wTMT_ocean = D[:, 5] figure(figsize=(3, 8)) plot(wTLS, h/1000, label="TLS") plot(wTTS, h/1000, label="TTS") plot(wTMT_ocean, h/1000, label="TMT ocean") plot(wTMT_land, h/1000, label="TMT land") plot(wTLT_ocean, h/1000, label="TLT ocean") plot(wTLT_land, h/1000, label="TLT land") xlim([0, 0.2]) ylim([0, 50]) legend() ylabel("Height [km]") show() Image(url="http://www.ssmi.com/msu/img/wt_func_plot_for_web_2012.all_channels2.png", embed=True) """ Explanation: Weight functions End of explanation """ from netCDF4 import Dataset from numpy.ma import average rootgrp = 
Dataset('data/uat4_tb_v03r03_avrg_chtlt_197812_201504.nc3.nc') list(rootgrp.variables) # 144 values, interval [-180, 180] longitude = rootgrp.variables["longitude"][:] # 72 values, interval [-90, 90] latitude = rootgrp.variables["latitude"][:] # 144 rows of [min, max] longitude_bounds = rootgrp.variables["longitude_bounds"][:] # 72 rows of [min, max] latitude_bounds = rootgrp.variables["latitude_bounds"][:] # time in days, 1978 - today time = rootgrp.variables["time"][:] # time in years years = time / 365.242 + 1978 # 12 values: time in days for 12 months in a year time_climatology = rootgrp.variables["climatology_time"][:] # (time, latitude, longitude) brightness_temperature = rootgrp.variables["brightness_temperature"][:] # (time_climatology, latitude, longitude) brightness_temperature_climatology = rootgrp.variables["brightness_temperature_climatology"][:] """ Explanation: Netcdf data End of explanation """ S_theta = pi / 36 * sin(pi/144) * cos(latitude*pi/180) sum(144 * S_theta)-4*pi """ Explanation: We need to calculate the element area (on a unit sphere) as follows: $$ S_{\theta\phi} = \int_{\theta_{min}}^{\theta_{max}} \int_{\phi_{min}}^{\phi_{max}} \sin\theta\, d \theta d \phi = (\cos\theta_{min} - \cos\theta_{max})(\phi_{max} - \phi_{min}) $$ Note that $-180 \le \phi \le 180$ is longitude and $0 \le \theta \le 180$ is something like latitude. Introducing $\Delta\theta = \theta_{max} - \theta_{min}$, $\Delta\phi = \phi_{max} - \phi_{min}$ and $\theta = {\theta_{max} + \theta_{min} \over 2}$ we can write: $$ S_{\theta\phi} = (\cos(\theta-{\Delta\theta\over2}) - \cos(\theta+{\Delta\theta\over 2})) \Delta \phi = 2 \Delta\phi \sin\theta\, \sin{\Delta\theta\over 2} $$ For $\Delta\theta = \Delta\phi = 2.5 {\pi\over 180} = {\pi\over 72}$ we finally obtain: $$ S_\theta = 2 {\pi\over 72} \sin {\pi\over 2\cdot72} \, \sin\theta = {\pi\over 36} \sin {\pi\over 144} \, \sin\theta $$ Finally, we would like to use $\theta$ for latitude, so we need to substitute $\theta \to \theta + {\pi\over 2}$: $$ S_\theta = {\pi\over 36} \sin {\pi\over 144} \, \sin(\theta+{\theta\over 2}) = {\pi\over 36} \sin {\pi\over 144} \, \cos\theta $$ As a check, we calculate the surface of the unit sphere (equal to $4\pi$): $$ \sum_{\theta}144S_\theta = 4\pi $$ End of explanation """ w_theta = sin(pi/144) * cos(latitude*pi/180) sum(w_theta) Tavg = average(brightness_temperature, axis=2) Tavg = average(Tavg, axis=1, weights=w_theta) plot(years, Tavg-273.15) xlabel("Year") ylabel("T [C]") title("TLT (Temperature Lower Troposphere)") show() """ Explanation: Let's create averaging weights that are normalized to 1 as follows: $$ w_\theta = S_\theta {144\over 4\pi} = \sin{\pi\over144}\cos\theta $$ $$ \sum_\theta w_\theta = 1 $$ End of explanation """ Tanom = empty(Tavg.shape) for i in range(12): Tanom[i::12] = Tavg[i::12] - average(Tavg[i::12]) """ Explanation: The temperature oscillates each year. 
To calculate the "anomaly", we subtract from each month its average temperature: End of explanation """ from scipy.stats import linregress # Skip the first year, start from 1979, that's why you see the "12" here and below: n0 = 12 # use 276 for the year 2001 Y0 = years[n0] a, b, _, _, adev = linregress(years[n0:]-Y0, Tanom[n0:]) print "par dev" print a, adev print b """ Explanation: We calculate linear fit End of explanation """ from matplotlib.ticker import MultipleLocator figure(figsize=(6.6, 3.5)) plot(years, Tanom, "b-", lw=0.7) plot(years, a*(years-Y0)+b, "b-", lw=0.7, label="Trend = $%.3f \pm %.3f$ K/decade" % (a*10, adev*10)) xlim([1979, 2016]) ylim([-1.2, 1.2]) gca().xaxis.set_minor_locator(MultipleLocator(1)) legend() xlabel("Year") ylabel("Temperature Anomaly [K]") title("TLT (Temperature Lower Troposphere)") show() Image(url="http://www.remss.com/data/msu/graphics/TLT/plots/RSS_TS_channel_TLT_Global_Land_And_Sea_v03_3.png", embed=True) """ Explanation: And compare against official graph + trend. As can be seen, the agreement is perfect: End of explanation """
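"""
Explanation (addendum): A small standalone check of the latitude weighting derived above. Assuming the 2.5-degree grid-cell centres (rather than reading them from the netCDF file), the per-band weights w_theta = sin(pi/144) * cos(latitude) should sum to 1 over the 72 bands, and the corresponding cell areas should sum to 4*pi over the full 144 x 72 grid.
End of explanation
"""
import numpy as np

latitude = np.arange(-88.75, 90, 2.5)                       # 72 assumed cell-centre latitudes
w_theta = np.sin(np.pi / 144) * np.cos(latitude * np.pi / 180)
S_theta = np.pi / 36 * np.sin(np.pi / 144) * np.cos(latitude * np.pi / 180)

print('number of bands :', latitude.size)                   # 72
print('sum of weights  :', w_theta.sum())                   # ~1
print('sphere surface  :', np.sum(144 * S_theta))           # ~4*pi
print('4*pi            :', 4 * np.pi)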
sergivalverde/cnn-ms-lesion-segmentation
example_1.ipynb
gpl-3.0
%load_ext autoreload %autoreload 2 import os from collections import OrderedDict from base import * from build_model_nolearn import cascade_model from config import * """ Explanation: Multiple Sclerosis (MS) lesion segmentation of MRI images using a cascade of two 3D convolutional neural networks This script assumes that Lasagne and nolearn have been installed correctly and CUDA / CUDNN are configured. Import libraries: End of explanation """ options = {} # -------------------------------------------------- # Experiment parameters # -------------------------------------------------- # image modalities used (T1, FLAIR, PD, T2, ...) options['modalities'] = ['T1', 'FLAIR'] # Select an experiment name to store net weights and segmentation masks options['experiment'] = 'test_CNN' # In order to expose the classifier to more challeging samples, a threshold can be used to to select # candidate voxels for training. Note that images are internally normalized to 0 mean 1 standard deviation # before applying thresholding. So a value of t > 0.5 on FLAIR is reasonable in most cases to extract # all WM lesion candidates options['min_th'] = 0.5 # randomize training features before fitting the model. options['randomize_train'] = True # Select between pixel-wise or fully-convolutional training models. Although implemented, fully-convolutional # models have been not tested with this cascaded model options['fully_convolutional'] = False # -------------------------------------------------- # model parameters # -------------------------------------------------- # 3D patch size. So, far only implemented for 3D CNN models. options['patch_size'] = (11,11,11) # percentage of the training vector that is going to be used to validate the model during training options['train_split'] = 0.25 # maximum number of epochs used to train the model options['max_epochs'] = 200 # maximum number of epochs without improving validation before stopping training (early stopping) options['patience'] = 25 # Number of samples used to test at once. This parameter should be around 50000 for machines # with less than 32GB of RAM options['batch_size'] = 50000 # net print verbosity. Set to zero for this particular notebook, but setting this value to 11 is recommended options['net_verbose'] = 0 # post-processing binary threshold. After segmentation, probabilistic masks are binarized using a defined threshold. options['t_bin'] = 0.8 # The resulting binary mask is filtered by removing lesion regions with lesion size before a defined value options['l_min'] = 20 """ Explanation: Model configuration: Configure the model options. Options are passed to the model using the dictionary options. The main options are: End of explanation """ exp_folder = os.path.join(os.getcwd(), options['experiment']) if not os.path.exists(exp_folder): os.mkdir(exp_folder) os.mkdir(os.path.join(exp_folder,'nets')) os.mkdir(os.path.join(exp_folder,'.train')) # set the output name options['test_name'] = 'cnn_' + options['experiment'] + '.nii.gz' """ Explanation: Experiment configuration: Organize the experiment. Although not necessary, intermediate results, network weights and final lesion segmentation masks are stored inside a folder with name options['experiment']. This is extremely useful when a lot of experiments are computed on the same images to declutter the user space. 
End of explanation """ train_folder = '/mnt/DATA/w/CNN/images/train_images' train_x_data = {} train_y_data = {} # TRAIN X DATA train_x_data['im1'] = {'T1': os.path.join(train_folder,'im1', 'T1.nii.gz'), 'FLAIR': os.path.join(train_folder,'im1', 'FLAIR.nii.gz')} train_x_data['im2'] = {'T1': os.path.join(train_folder,'im2', 'T1.nii.gz'), 'FLAIR': os.path.join(train_folder,'im2', 'FLAIR.nii.gz')} train_x_data['im3'] = {'T1': os.path.join(train_folder,'im3', 'T1.nii.gz'), 'FLAIR': os.path.join(train_folder,'im3', 'FLAIR.nii.gz')} # TRAIN LABELS train_y_data['im1'] = os.path.join(train_folder,'im1', 'lesion_bin.nii.gz') train_y_data['im2'] = os.path.join(train_folder,'im2', 'lesion_bin.nii.gz') train_y_data['im3'] = os.path.join(train_folder,'im3', 'lesion_bin.nii.gz') """ Explanation: Load the training data: Training data is internally loaded by the method. So far, training and testing images are passed as dictionaries, where each training image is stored as follows: traininig_X_data['image_identifier'] = {'modality_1': /path/to/image_modality_n.nii.gz/, .... 'modality_n': /path/to/image_modality_n.nii.gz/} And also for labels: traininig_y_data['image_identifier_1'] = 'path/to/image_labels.nii.gz/' NOTE: As stated in the paper, input images have been already skull-stripped and bias corrected (N3, etc...) by the user before running the classifer. End of explanation """ options['weight_paths'] = os.getcwd() model = cascade_model(options) """ Explanation: Initialize the model: The model is initialized using the function cascade_model, which returns a list of two NeuralNet objects. Optimized weights are stored also inside the experiment folder for future use (testing different images without re-training the model. End of explanation """ model = train_cascaded_model(model, train_x_data, train_y_data, options) """ Explanation: Train the model: The function train_cascaded_model is used to train the model. The next image summarizes the training procedure. For further information about how this function optimizes the two CNN, please consult the original paper. (NOTE: For this example, options['net_verbose] has been set to 0 for simplicity) End of explanation """ # TEST X DATA test_folder = '/mnt/DATA/w/CNN/images/test_images' test_x_data = {} test_x_data['im1'] = {'T1': os.path.join(test_folder,'im1', 'T1.nii.gz'), 'FLAIR': os.path.join(test_folder,'im1', 'FLAIR.nii.gz')} # set the output_location of the final segmentation. In this particular example, # we are training and testing on the same images options['test_folder'] = test_folder options['test_scan'] = 'im1' out_seg = test_cascaded_model(model, test_x_data, options) """ Explanation: Test the model: Once the model has been trained, it can e tested on other images. Please note that the same image modalities have to be used. Testing images are loaded equally to training_data, so a dictionary defines the modalities used: test_X_data['image_identifier'] = {'modality_1': /path/to/image_modality_n.nii.gz/, .... 'modality_n': /path/to/image_modality_n.nii.gz/} End of explanation """ from metrics import * # load the GT annotation for the tested image GT = nib.load(os.path.join(test_folder,'im1', 'lesion_bin.nii.gz')).get_data() """ Explanation: Compute different metrics on tested data: End of explanation """
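"""
Explanation (addendum): A minimal sketch of one common evaluation metric for lesion segmentation, the Dice similarity coefficient between a binary output mask and the ground-truth mask. It is written directly in NumPy as an illustration and is not the implementation provided by the repository's metrics module; the toy arrays below are made up, but with real data one would pass the binarized out_seg and GT volumes loaded above.
End of explanation
"""
import numpy as np

def dice_coefficient(segmentation, ground_truth, eps=1e-8):
    # both inputs are interpreted as binary masks of the same shape
    seg = np.asarray(segmentation).astype(bool)
    gt = np.asarray(ground_truth).astype(bool)
    intersection = np.logical_and(seg, gt).sum()
    return (2.0 * intersection + eps) / (seg.sum() + gt.sum() + eps)

# toy example on a small array
seg_demo = np.array([[0, 1, 1], [0, 1, 0]])
gt_demo = np.array([[0, 1, 0], [0, 1, 1]])
print('DSC:', dice_coefficient(seg_demo, gt_demo))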
Kaggle/learntools
notebooks/computer_vision/raw/ex5.ipynb
apache-2.0
# Setup feedback system from learntools.core import binder binder.bind(globals()) from learntools.computer_vision.ex5 import * # Imports import os, warnings import matplotlib.pyplot as plt from matplotlib import gridspec import numpy as np import tensorflow as tf from tensorflow.keras.preprocessing import image_dataset_from_directory # Reproducability def set_seed(seed=31415): np.random.seed(seed) tf.random.set_seed(seed) os.environ['PYTHONHASHSEED'] = str(seed) os.environ['TF_DETERMINISTIC_OPS'] = '1' set_seed() # Set Matplotlib defaults plt.rc('figure', autolayout=True) plt.rc('axes', labelweight='bold', labelsize='large', titleweight='bold', titlesize=18, titlepad=10) plt.rc('image', cmap='magma') warnings.filterwarnings("ignore") # to clean up output cells # Load training and validation sets ds_train_ = image_dataset_from_directory( '../input/car-or-truck/train', labels='inferred', label_mode='binary', image_size=[128, 128], interpolation='nearest', batch_size=64, shuffle=True, ) ds_valid_ = image_dataset_from_directory( '../input/car-or-truck/valid', labels='inferred', label_mode='binary', image_size=[128, 128], interpolation='nearest', batch_size=64, shuffle=False, ) # Data Pipeline def convert_to_float(image, label): image = tf.image.convert_image_dtype(image, dtype=tf.float32) return image, label AUTOTUNE = tf.data.experimental.AUTOTUNE ds_train = ( ds_train_ .map(convert_to_float) .cache() .prefetch(buffer_size=AUTOTUNE) ) ds_valid = ( ds_valid_ .map(convert_to_float) .cache() .prefetch(buffer_size=AUTOTUNE) ) """ Explanation: Introduction In these exercises, you'll build a custom convnet with performance competitive to the VGG16 model from Lesson 1. Get started by running the code cell below. End of explanation """ from tensorflow import keras from tensorflow.keras import layers model = keras.Sequential([ # Block One layers.Conv2D(filters=32, kernel_size=3, activation='relu', padding='same', input_shape=[128, 128, 3]), layers.MaxPool2D(), # Block Two layers.Conv2D(filters=64, kernel_size=3, activation='relu', padding='same'), layers.MaxPool2D(), # Block Three # YOUR CODE HERE # ____, # Head layers.Flatten(), layers.Dense(6, activation='relu'), layers.Dropout(0.2), layers.Dense(1, activation='sigmoid'), ]) # Check your answer q_1.check() #%%RM_IF(PROD)%% from tensorflow import keras from tensorflow.keras import layers model = keras.Sequential([ # Block One layers.Conv2D(filters=32, kernel_size=3, activation='relu', padding='same', input_shape=[128, 128, 3]), layers.MaxPool2D(), # Block Two layers.Conv2D(filters=64, kernel_size=3, activation='relu', padding='same'), layers.MaxPool2D(), # Block Three layers.Conv2D(filters=128, kernel_size=3, activation='relu', padding='same'), layers.Conv2D(filters=128, kernel_size=3, activation='relu', padding='same'), layers.MaxPool2D(), # Head layers.Flatten(), layers.Dense(6, activation='relu'), layers.Dropout(0.2), layers.Dense(1, activation='sigmoid'), ]) q_1.assert_check_passed() # Lines below will give you a hint or solution code #_COMMENT_IF(PROD)_ q_1.hint() #_COMMENT_IF(PROD)_ q_1.solution() """ Explanation: Design a Convnet Let's design a convolutional network with a block architecture like we saw in the tutorial. The model from the example had three blocks, each with a single convolutional layer. Its performance on the "Car or Truck" problem was okay, but far from what the pretrained VGG16 could achieve. It might be that our simple network lacks the ability to extract sufficiently complex features. 
We could try improving the model either by adding more blocks or by adding convolutions to the blocks we have. Let's go with the second approach. We'll keep the three block structure, but increase the number of Conv2D layer in the second block to two, and in the third block to three. <figure> <!-- <img src="./images/2-convmodel-2.png" width="250" alt="Diagram of a convolutional model."> --> <img src="https://i.imgur.com/Vko6nCK.png" width="250" alt="Diagram of a convolutional model."> </figure> 1) Define Model Given the diagram above, complete the model by defining the layers of the third block. End of explanation """ model.compile( optimizer=tf.keras.optimizers.Adam(epsilon=0.01), # YOUR CODE HERE: Add loss and metric ) # Check your answer q_2.check() model.compile( optimizer=tf.keras.optimizers.Adam(epsilon=0.01), loss='binary_crossentropy', metrics=['binary_accuracy'], ) q_2.assert_check_passed() # Lines below will give you a hint or solution code #_COMMENT_IF(PROD)_ q_2.hint() #_COMMENT_IF(PROD)_ q_2.solution() """ Explanation: 2) Compile To prepare for training, compile the model with an appropriate loss and accuracy metric for the "Car or Truck" dataset. End of explanation """ history = model.fit( ds_train, validation_data=ds_valid, epochs=50, ) """ Explanation: Finally, let's test the performance of this new model. First run this cell to fit the model to the training set. End of explanation """ import pandas as pd history_frame = pd.DataFrame(history.history) history_frame.loc[:, ['loss', 'val_loss']].plot() history_frame.loc[:, ['binary_accuracy', 'val_binary_accuracy']].plot(); """ Explanation: And now run the cell below to plot the loss and metric curves for this training run. End of explanation """ # View the solution (Run this code cell to receive credit!) q_3.check() """ Explanation: 3) Train the Model How would you interpret these training curves? Did this model improve upon the model from the tutorial? End of explanation """
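"""
Explanation (addendum): If the validation curves above suggest overfitting, one common next step is to stop training once the validation loss stops improving. Below is a minimal sketch using a Keras EarlyStopping callback; it reuses the compiled model and the ds_train / ds_valid datasets from the cells above (they must still be in scope), and the particular patience value is just an illustrative choice.
End of explanation
"""
from tensorflow import keras

early_stopping = keras.callbacks.EarlyStopping(
    monitor='val_loss',         # watch the validation loss
    patience=5,                 # stop after 5 epochs with no improvement
    restore_best_weights=True,  # roll back to the weights of the best epoch
)

history_es = model.fit(
    ds_train,
    validation_data=ds_valid,
    epochs=50,
    callbacks=[early_stopping],
)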
sueiras/training
tensorflow/02-text/01-char_languaje_model/Text_generation_with_Quijote.ipynb
gpl-3.0
# Header from __future__ import print_function import numpy as np import tensorflow as tf print('Tensorflow version: ', tf.__version__) import time #Show images import matplotlib.pyplot as plt %matplotlib inline # plt configuration plt.rcParams['figure.figsize'] = (10, 10) # size of images plt.rcParams['image.interpolation'] = 'nearest' # show exact image plt.rcParams['image.cmap'] = 'gray' # use grayscale # GPU devices visible by python import os os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" os.environ["CUDA_VISIBLE_DEVICES"]="0" path = '/home/ubuntu/data/training/text/quijote/' """ Explanation: Create a RNN model to text generation RNN model at character level Input: n character previous Output: next character Model LSTM Use 'El Quijote' to train the generator End of explanation """ #Read book text = open(path + "pg2000.txt").read().lower() print('corpus length:', len(text)) # Simplify text to improve the semantic capacities of the model. delete_chars = [ '"', '#', '$', '%', "'", '(', ')', '*', '-', '/', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '@', '[', ']', '«', '»', 'à', 'ï', 'ù', '\ufeff'] for ch in delete_chars: text=text.replace(ch,"") print('corpus length deleted:', len(text)) chars = sorted(list(set(text))) print('Chars list: ', chars) print('total chars:', len(chars)) #Dictionaries to convert char to num & num to char char_indices = dict((c, i) for i, c in enumerate(chars)) indices_char = dict((i, c) for i, c in enumerate(chars)) # cut the text in semi-redundant sequences of maxlen characters # One sentence of length 20 for each 3 characters maxlen = 20 step = 3 sentences = [] next_chars = [] for i in range(3000, len(text) - maxlen, step): #Start in character 3000 to exclude Gutenberg header. sentences.append(text[i: i + maxlen]) next_chars.append(text[i + maxlen]) print('nb sequences:', len(sentences)) print(sentences[4996], '-', next_chars[4996]) """ Explanation: Download data and generate sequences Download quijote from guttenberg project wget http://www.gutenberg.org/cache/epub/2000/pg2000.txt End of explanation """ ''' X: One row by sentence in each row a matrix of bool 0/1 of dim length_sentence x num_chars coding the sentence. 
Dummy variables y: One row by sentence in each row a vector of bool of lengt num_chars with 1 in the next char position ''' print('Vectorization...') X = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.bool) y = np.zeros((len(sentences), len(chars)), dtype=np.bool) #X = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.float16) #y = np.zeros((len(sentences), len(chars)), dtype=np.float16) for i, sentence in enumerate(sentences): for t, char in enumerate(sentence): X[i, t, char_indices[char]] = 1 y[i, char_indices[next_chars[i]]] = 1 print('X shape: ',X.shape) print('y shape: ',y.shape) # build the model: 2 stacked LSTM from tensorflow.contrib.keras import models, layers, optimizers print('Build model 1') seq_prev_input = layers.Input(shape=(maxlen, len(chars)), name='prev') # apply forwards LSTM forwards1 = layers.LSTM(1024, return_sequences=True, dropout=0.3, recurrent_dropout=0.3)(seq_prev_input) forwards2 = layers.LSTM(1024, return_sequences=True, dropout=0.3, recurrent_dropout=0.3)(forwards1) forwards3 = layers.LSTM(1024, return_sequences=False, dropout=0.3, recurrent_dropout=0.3)(forwards2) output = layers.Dense(len(chars), activation='softmax')(forwards3) model = models.Model(inputs=seq_prev_input, outputs=output) model.summary() # try using different optimizers and different optimizer configs nadam = optimizers.Nadam(lr=0.0002, schedule_decay=0.000025) model.compile(loss='categorical_crossentropy', optimizer=nadam, metrics=['accuracy']) #Plot the model graph from tensorflow.contrib.keras import utils # Create model image utils.plot_model(model, '/tmp/model.png') # Show image plt.imshow(plt.imread('/tmp/model.png')) #Fit model history = model.fit(X[:600000], y[:600000], batch_size=256, epochs=12, validation_data=(X[600000:], y[600000:])) import matplotlib.pyplot as plt %matplotlib inline plt.rcParams['figure.figsize'] = (8, 8) plt.plot(history.history['acc']) plt.plot(history.history['val_acc']) plt.show() # Save model models.save_model(model, path + 'models/text_generation_model1024.h5') """ Explanation: Train the model End of explanation """ # Load model model1 = models.load_model(path + 'models/text_generation_model1024.h5') maxlen = 20 def sample(a, diversity=1.0): ''' helper function to sample an index from a probability array - Diversity control the level of randomless ''' a = np.log(a) / diversity a = np.exp(a) / np.sum(np.exp(a), axis=0) a /= np.sum(a+0.0000001) #Precission error return np.argmax(np.random.multinomial(1, a, 1)) def generate_text(sentence, diversity, current_model, num_char=400): sentence_init = sentence generated = '' for i in range(400): x = np.zeros((1, maxlen, len(chars))) for t, char in enumerate(sentence): x[0, t, char_indices[char]] = 1. 
preds = current_model.predict(x, verbose=0)[0] next_index = sample(preds, diversity) next_char = indices_char[next_index] generated += next_char sentence = sentence[1:] + next_char print() print('DIVERSITY: ',diversity) print(sentence_init + generated) sentence = 'mire vuestra merced ' generate_text(sentence, 0.2, model1) sentence = 'mire vuestra merced ' generate_text(sentence, 0.2, model1) generate_text(sentence, 0.5, model1) generate_text(sentence, 1, model1) generate_text(sentence, 1.2, model1) sentence = 'a mi señora dulcinea' generate_text(sentence, 0.2, model1) generate_text(sentence, 0.5, model1) generate_text(sentence, 1, model1) generate_text(sentence, 1.2, model1) sentence = 'el caballero andante' generate_text(sentence, 0.2, model1) generate_text(sentence, 0.5, model1) generate_text(sentence, 1, model1) generate_text(sentence, 1.2, model1) """ Explanation: Evaluate model End of explanation """
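"""
Explanation (addendum): A small standalone illustration of what the diversity (temperature) parameter in the sample() helper does: low diversity sharpens the predicted character distribution towards the most likely character, while high diversity flattens it. The probability vector below is made up purely for the demonstration.
End of explanation
"""
import numpy as np

def rescale(probs, diversity):
    # same reweighting as in sample(): p -> p**(1/diversity), renormalized
    a = np.log(probs) / diversity
    a = np.exp(a)
    return a / np.sum(a)

preds_demo = np.array([0.5, 0.3, 0.15, 0.05])   # hypothetical softmax output for 4 characters
for diversity in [0.2, 0.5, 1.0, 1.2]:
    print('diversity', diversity, '->', np.round(rescale(preds_demo, diversity), 3))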
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/machine_learning_in_the_enterprise/labs/model_monitoring.ipynb
apache-2.0
import os # The Google Cloud Notebook product has specific requirements IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version") # Google Cloud Notebook requires dependencies to be installed with '--user' USER_FLAG = "" if IS_GOOGLE_CLOUD_NOTEBOOK: USER_FLAG = "--user" # Import necessary libraries import os import sys import IPython assert sys.version_info.major == 3, "This notebook requires Python 3." # Install Python package dependencies. # Upgrade the specified packages to the available versions print("Installing TensorFlow 2.4.1 and TensorFlow Data Validation (TFDV)") ! pip3 install {USER_FLAG} --quiet --upgrade tensorflow==2.4.1 tensorflow_data_validation[visualization] ! pip3 install {USER_FLAG} --quiet --upgrade google-api-python-client google-auth-oauthlib google-auth-httplib2 oauth2client requests ! pip3 install {USER_FLAG} --quiet --upgrade google-cloud-aiplatform ! pip3 install {USER_FLAG} --quiet --upgrade google-cloud-storage==1.32.0 # Automatically restart kernel after installing new packages. if not os.getenv("IS_TESTING"): print("Restarting kernel...") app = IPython.Application.instance() app.kernel.do_shutdown(True) print("Done.") import os import random import sys import time # Import required packages. import numpy as np """ Explanation: Monitoring Vertex AI Model Overview What is Model Monitoring? Modern applications rely on a well established set of capabilities to monitor the health of their services. Examples include: software versioning rigorous deployment processes event logging alerting/notification of situations requiring intervention on-demand and automated diagnostic tracing automated performance and functional testing You should be able to manage your ML services with the same degree of power and flexibility with which you can manage your applications. That's what MLOps is all about - managing ML services with the best practices Google and the broader computing industry have learned from generations of experience deploying well engineered, reliable, and scalable services. Model monitoring is only one piece of the ML Ops puzzle - it helps answer the following questions: How well do recent service requests match the training data used to build your model? This is called training-serving skew. How significantly are service requests evolving over time? This is called drift detection. If production traffic differs from training data, or varies substantially over time, that's likely to impact the quality of the answers your model produces. When that happens, you'd like to be alerted automatically and responsively, so that you can anticipate problems before they affect your customer experiences or your revenue streams. Learning Objectives Deploy a pre-trained model. Configure model monitoring. Generate some artificial traffic. Interpret the data reported by the model monitoring feature. Introduction In this notebook, you will deploy a pre-trained model to an endpoint and generate some prediction requests on the model. You will also create a monitoring job to keep an eye on the model quality and generate test data to trigger alerting. The example model The model you'll use in this notebook is based on this blog post. The idea behind this model is that your company has extensive log data describing how your game users have interacted with the site. 
The raw data contains the following categories of information: identity - unique player identitity numbers demographic features - information about the player, such as the geographic region in which a player is located behavioral features - counts of the number of times a player has triggered certain game events, such as reaching a new level churn propensity - this is the label or target feature, it provides an estimated probability that this player will churn, i.e. stop being an active player. The blog article referenced above explains how to use BigQuery to store the raw data, pre-process it for use in machine learning, and train a model. Because this notebook focuses on model monitoring, rather than training models, you're going to reuse a pre-trained version of this model, which has been exported to Google Cloud Storage. In the next section, you will setup your environment and import this model into your own project. Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the Solution Notebook for reference. Before you begin Setup your dependencies End of explanation """ # Make sure to replace [your-project-id] with your GCP project ID PROJECT_ID = "[your-project-id]" REGION = "us-central1" SUFFIX = "aiplatform.googleapis.com" API_ENDPOINT = f"{REGION}-{SUFFIX}" PREDICT_API_ENDPOINT = f"{REGION}-prediction-{SUFFIX}" if os.getenv("IS_TESTING"): !gcloud --quiet components install beta !gcloud --quiet components update !gcloud config set project $PROJECT_ID !gcloud config set ai/region $REGION """ Explanation: Set up your Google Cloud project For this lab, we're going to use region us-central1 for all our resources (BigQuery training data, Cloud Storage bucket, model and endpoint locations, etc.). Those resources can be deployed in other regions, as long as they're consistently co-located, but we're going to use one fixed region to keep things as simple and error free as possible. End of explanation """ # The Google Cloud Notebook product has specific requirements IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version") # If on Google Cloud Notebooks, then don't execute this code if not IS_GOOGLE_CLOUD_NOTEBOOK: if "google.colab" in sys.modules: from google.colab import auth as google_auth google_auth.authenticate_user() # If you are running this notebook locally, replace the string below with the # path to your service account key and run this cell to authenticate your GCP # account. 
elif not os.getenv("IS_TESTING"): %env GOOGLE_APPLICATION_CREDENTIALS '' # Enable AI services !gcloud services enable aiplatform.googleapis.com """ Explanation: Login to your Google Cloud account and enable AI services End of explanation """ # @title Utility functions import copy import os from google.cloud.aiplatform_v1beta1.services.endpoint_service import \ EndpointServiceClient from google.cloud.aiplatform_v1beta1.services.job_service import \ JobServiceClient from google.cloud.aiplatform_v1beta1.services.prediction_service import \ PredictionServiceClient from google.cloud.aiplatform_v1beta1.types.io import BigQuerySource from google.cloud.aiplatform_v1beta1.types.model_deployment_monitoring_job import ( ModelDeploymentMonitoringJob, ModelDeploymentMonitoringObjectiveConfig, ModelDeploymentMonitoringScheduleConfig) from google.cloud.aiplatform_v1beta1.types.model_monitoring import ( ModelMonitoringAlertConfig, ModelMonitoringObjectiveConfig, SamplingStrategy, ThresholdConfig) from google.cloud.aiplatform_v1beta1.types.prediction_service import \ PredictRequest from google.protobuf import json_format from google.protobuf.duration_pb2 import Duration from google.protobuf.struct_pb2 import Value DEFAULT_THRESHOLD_VALUE = 0.001 def create_monitoring_job(objective_configs): # Create sampling configuration. random_sampling = SamplingStrategy.RandomSampleConfig(sample_rate=LOG_SAMPLE_RATE) sampling_config = SamplingStrategy(random_sample_config=random_sampling) # Create schedule configuration. duration = Duration(seconds=MONITOR_INTERVAL) schedule_config = ModelDeploymentMonitoringScheduleConfig(monitor_interval=duration) # Create alerting configuration. emails = [USER_EMAIL] email_config = ModelMonitoringAlertConfig.EmailAlertConfig(user_emails=emails) alerting_config = ModelMonitoringAlertConfig(email_alert_config=email_config) # Create the monitoring job. 
endpoint = f"projects/{PROJECT_ID}/locations/{REGION}/endpoints/{ENDPOINT_ID}" predict_schema = "" analysis_schema = "" job = ModelDeploymentMonitoringJob( display_name=JOB_NAME, endpoint=endpoint, model_deployment_monitoring_objective_configs=objective_configs, logging_sampling_strategy=sampling_config, model_deployment_monitoring_schedule_config=schedule_config, model_monitoring_alert_config=alerting_config, predict_instance_schema_uri=predict_schema, analysis_instance_schema_uri=analysis_schema, ) options = dict(api_endpoint=API_ENDPOINT) client = JobServiceClient(client_options=options) parent = f"projects/{PROJECT_ID}/locations/{REGION}" response = client.create_model_deployment_monitoring_job( parent=parent, model_deployment_monitoring_job=job ) print("Created monitoring job:") print(response) return response def get_thresholds(default_thresholds, custom_thresholds): thresholds = {} default_threshold = ThresholdConfig(value=DEFAULT_THRESHOLD_VALUE) for feature in default_thresholds.split(","): feature = feature.strip() thresholds[feature] = default_threshold for custom_threshold in custom_thresholds.split(","): pair = custom_threshold.split(":") if len(pair) != 2: print(f"Invalid custom skew threshold: {custom_threshold}") return feature, value = pair thresholds[feature] = ThresholdConfig(value=float(value)) return thresholds def get_deployed_model_ids(endpoint_id): client_options = dict(api_endpoint=API_ENDPOINT) client = EndpointServiceClient(client_options=client_options) parent = f"projects/{PROJECT_ID}/locations/{REGION}" response = client.get_endpoint(name=f"{parent}/endpoints/{endpoint_id}") model_ids = [] for model in response.deployed_models: model_ids.append(model.id) return model_ids def set_objectives(model_ids, objective_template): # Use the same objective config for all models. objective_configs = [] for model_id in model_ids: objective_config = copy.deepcopy(objective_template) objective_config.deployed_model_id = model_id objective_configs.append(objective_config) return objective_configs def send_predict_request(endpoint, input): client_options = {"api_endpoint": PREDICT_API_ENDPOINT} client = PredictionServiceClient(client_options=client_options) params = {} params = json_format.ParseDict(params, Value()) request = PredictRequest(endpoint=endpoint, parameters=params) inputs = [json_format.ParseDict(input, Value())] request.instances.extend(inputs) response = client.predict(request) return response def list_monitoring_jobs(): client_options = dict(api_endpoint=API_ENDPOINT) parent = f"projects/{PROJECT_ID}/locations/us-central1" client = JobServiceClient(client_options=client_options) response = client.list_model_deployment_monitoring_jobs(parent=parent) print(response) def pause_monitoring_job(job): client_options = dict(api_endpoint=API_ENDPOINT) client = JobServiceClient(client_options=client_options) response = client.pause_model_deployment_monitoring_job(name=job) print(response) def delete_monitoring_job(job): client_options = dict(api_endpoint=API_ENDPOINT) client = JobServiceClient(client_options=client_options) response = client.delete_model_deployment_monitoring_job(name=job) print(response) # Sampling distributions for categorical features... 
DAYOFWEEK = {1: 1040, 2: 1223, 3: 1352, 4: 1217, 5: 1078, 6: 1011, 7: 1110} LANGUAGE = { "en-us": 4807, "en-gb": 678, "ja-jp": 419, "en-au": 310, "en-ca": 299, "de-de": 147, "en-in": 130, "en": 127, "fr-fr": 94, "pt-br": 81, "es-us": 65, "zh-tw": 64, "zh-hans-cn": 55, "es-mx": 53, "nl-nl": 37, "fr-ca": 34, "en-za": 29, "vi-vn": 29, "en-nz": 29, "es-es": 25, } OS = {"IOS": 3980, "ANDROID": 3798, "null": 253} MONTH = {6: 3125, 7: 1838, 8: 1276, 9: 1718, 10: 74} COUNTRY = { "United States": 4395, "India": 486, "Japan": 450, "Canada": 354, "Australia": 327, "United Kingdom": 303, "Germany": 144, "Mexico": 102, "France": 97, "Brazil": 93, "Taiwan": 72, "China": 65, "Saudi Arabia": 49, "Pakistan": 48, "Egypt": 46, "Netherlands": 45, "Vietnam": 42, "Philippines": 39, "South Africa": 38, } # Means and standard deviations for numerical features... MEAN_SD = { "julianday": (204.6, 34.7), "cnt_user_engagement": (30.8, 53.2), "cnt_level_start_quickplay": (7.8, 28.9), "cnt_level_end_quickplay": (5.0, 16.4), "cnt_level_complete_quickplay": (2.1, 9.9), "cnt_level_reset_quickplay": (2.0, 19.6), "cnt_post_score": (4.9, 13.8), "cnt_spend_virtual_currency": (0.4, 1.8), "cnt_ad_reward": (0.1, 0.6), "cnt_challenge_a_friend": (0.0, 0.3), "cnt_completed_5_levels": (0.1, 0.4), "cnt_use_extra_steps": (0.4, 1.7), } DEFAULT_INPUT = { "cnt_ad_reward": 0, "cnt_challenge_a_friend": 0, "cnt_completed_5_levels": 1, "cnt_level_complete_quickplay": 3, "cnt_level_end_quickplay": 5, "cnt_level_reset_quickplay": 2, "cnt_level_start_quickplay": 6, "cnt_post_score": 34, "cnt_spend_virtual_currency": 0, "cnt_use_extra_steps": 0, "cnt_user_engagement": 120, "country": "Denmark", "dayofweek": 3, "julianday": 254, "language": "da-dk", "month": 9, "operating_system": "IOS", "user_pseudo_id": "104B0770BAE16E8B53DF330C95881893", } """ Explanation: Define some helper functions Run the following cell to define some utility functions used throughout this notebook. Although these functions are not critical to understand the main concepts, feel free to expand the cell if you're curious or want to dive deeper into how some of your API requests are made. End of explanation """ # TODO # Import your model MODEL_NAME = "churn" IMAGE = "us-docker.pkg.dev/cloud-aiplatform/prediction/tf2-cpu.2-4:latest" ARTIFACT = "gs://mco-mm/churn" output = !gcloud --quiet beta ai models upload --container-image-uri=$IMAGE --artifact-uri=$ARTIFACT --display-name=$MODEL_NAME --format="value(model)" print("model output: ", output) MODEL_ID = # TODO: Your code goes here print(f"Model {MODEL_NAME}/{MODEL_ID} created.") """ Explanation: Import your model The churn propensity model you'll be using in this notebook has been trained in BigQuery ML and exported to a Google Cloud Storage bucket. This illustrates how you can easily export a trained model and move a model from one cloud service to another. Run the next cell to import this model into your project. If you've already imported your model, you can skip this step. 
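If you get stuck on the MODEL_ID TODO above, one possible approach (an illustrative sketch, not the official solution) is to take the model resource name that gcloud prints with --format="value(model)" -- it looks like projects/.../locations/.../models/MODEL_ID -- and keep only its final path segment:
MODEL_ID = output[-1].split("/")[-1]
Inspect the printed output list first, though; which element holds the resource name can vary by gcloud version, so adjust the index to match what you actually see.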
End of explanation """ # TODO # Deploy your model to the endpoint ENDPOINT_NAME = "churn" output = !gcloud --quiet beta ai endpoints create --display-name=$ENDPOINT_NAME --format="value(name)" print("endpoint output: ", output) ENDPOINT = output[-1] ENDPOINT_ID = # TODO: Your code goes here output = !gcloud --quiet beta ai endpoints deploy-model $ENDPOINT_ID --display-name=$ENDPOINT_NAME --model=$MODEL_ID --traffic-split="0=100" DEPLOYED_MODEL_ID = # TODO: Your code goes here print( f"Model {MODEL_NAME}/{MODEL_ID}/{DEPLOYED_MODEL_ID} deployed to Endpoint {ENDPOINT_NAME}/{ENDPOINT_ID}/{ENDPOINT}." ) """ Explanation: Deploy your endpoint Now that you've imported your model into your project, you need to create an endpoint to serve your model. An endpoint can be thought of as a channel through which your model provides prediction services. Once established, you'll be able to make prediction requests on your model via the public internet. Your endpoint is also serverless, in the sense that Google ensures high availability by reducing single points of failure, and scalability by dynamically allocating resources to meet the demand for your service. In this way, you are able to focus on your model quality, and are freed from administrative and infrastructure concerns. Run the next cell to deploy your model to an endpoint. This will take about ten minutes to complete. If you've already deployed a model to an endpoint, you can reuse your endpoint by running the cell after the next one. End of explanation """ import pprint as pp print(ENDPOINT) print("request:") pp.pprint(DEFAULT_INPUT) try: resp = send_predict_request(ENDPOINT, DEFAULT_INPUT) print("response") pp.pprint(resp) except Exception: print("prediction request failed") """ Explanation: Run a prediction test Now that you have imported a model and deployed that model to an endpoint, you are ready to verify that it's working. Run the next cell to send a test prediction request. If everything works as expected, you should receive a response encoded in a text representation called JSON. Try this now by running the next cell and examining the results. End of explanation """ USER_EMAIL = "[your-mail-id]" JOB_NAME = "churn" # Sampling rate (optional, default=.8) LOG_SAMPLE_RATE = 0.8 # Monitoring Interval in seconds (optional, default=3600). MONITOR_INTERVAL = 3600 # URI to training dataset. DATASET_BQ_URI = "bq://mco-mm.bqmlga4.train" # Prediction target column name in training dataset. TARGET = "churned" # Skew and drift thresholds. SKEW_DEFAULT_THRESHOLDS = "country,language" SKEW_CUSTOM_THRESHOLDS = "cnt_user_engagement:.5" DRIFT_DEFAULT_THRESHOLDS = "country,language" DRIFT_CUSTOM_THRESHOLDS = "cnt_user_engagement:.5" """ Explanation: Taking a closer look at the results, we see the following elements: churned_values - a set of possible values (0 and 1) for the target field churned_probs - a corresponding set of probabilities for each possible target field value (5x10^-40 and 1.0, respectively) predicted_churn - based on the probabilities, the predicted value of the target field (1) This response encodes the model's prediction in a format that is readily digestible by software, which makes this service ideal for automated use by an application. Start your monitoring job Now that you've created an endpoint to serve prediction requests on your model, you're ready to start a monitoring job to keep an eye on model quality and to alert you if and when input begins to deviate in a way that may impact your model's prediction quality.
In this section, you will configure and create a model monitoring job based on the churn propensity model you imported from BigQuery ML. Configure the following fields: Log sample rate - Your prediction requests and responses are logged to BigQuery tables, which are automatically created when you create a monitoring job. This parameter specifies the desired logging frequency for those tables. Monitor interval - the time window over which to analyze your data and report anomalies. The minimum window is one hour (3600 seconds). Target field - the prediction target column name in training dataset. Skew detection threshold - the skew threshold for each feature you want to monitor. Prediction drift threshold - the drift threshold for each feature you want to monitor. End of explanation """ # TODO # Create a monitoring job skew_thresholds = get_thresholds(SKEW_DEFAULT_THRESHOLDS, SKEW_CUSTOM_THRESHOLDS) drift_thresholds = get_thresholds(DRIFT_DEFAULT_THRESHOLDS, DRIFT_CUSTOM_THRESHOLDS) skew_config = ModelMonitoringObjectiveConfig.TrainingPredictionSkewDetectionConfig( skew_thresholds=skew_thresholds ) drift_config = ModelMonitoringObjectiveConfig.PredictionDriftDetectionConfig( drift_thresholds=drift_thresholds ) training_dataset = ModelMonitoringObjectiveConfig.TrainingDataset(target_field=TARGET) training_dataset.bigquery_source = BigQuerySource(input_uri=DATASET_BQ_URI) objective_config = ModelMonitoringObjectiveConfig( training_dataset=training_dataset, training_prediction_skew_detection_config=skew_config, prediction_drift_detection_config=drift_config, ) model_ids = get_deployed_model_ids(ENDPOINT_ID) objective_template = ModelDeploymentMonitoringObjectiveConfig( objective_config=objective_config ) objective_configs = set_objectives(model_ids, objective_template) monitoring_job = # TODO: Your code goes here # Run a prediction request to generate schema, if necessary. try: _ = send_predict_request(ENDPOINT, DEFAULT_INPUT) print("prediction succeeded") except Exception: print("prediction failed") """ Explanation: Create your monitoring job The following code uses the Google Python client library to translate your configuration settings into a programmatic request to start a model monitoring job. Instantiating a monitoring job can take some time. If everything looks good with your request, you'll get a successful API response. Then, you'll need to check your email to receive a notification that the job is running. End of explanation """ !gsutil ls gs://cloud-ai-platform-fdfb4810-148b-4c86-903c-dbdff879f6e1/*/* """ Explanation: After a minute or two, you should receive email at the address you configured above for USER_EMAIL. This email confirms successful deployment of your monitoring job. Here's a sample of what this email might look like: As your monitoring job collects data, measurements are stored in Google Cloud Storage and you are free to examine your data at any time. The circled path in the image above specifies the location of your measurements in Google Cloud Storage. Run the following cell to take a look at your measurements in Cloud Storage. End of explanation """ # TODO # Generate test predictions to trigger the thresholds def random_uid(): digits = [str(i) for i in range(10)] + ["A", "B", "C", "D", "E", "F"] return "".join(random.choices(digits, k=32)) def monitoring_test(count, sleep, perturb_num={}, perturb_cat={}): # Use random sampling and mean/sd with gaussian distribution to model # training data. 
Then modify sampling distros for two categorical features # and mean/sd for two numerical features. mean_sd = MEAN_SD.copy() country = COUNTRY.copy() for k, (mean_fn, sd_fn) in perturb_num.items(): orig_mean, orig_sd = MEAN_SD[k] mean_sd[k] = (mean_fn(orig_mean), sd_fn(orig_sd)) for k, v in perturb_cat.items(): country[k] = v for i in range(0, count): input = DEFAULT_INPUT.copy() input["user_pseudo_id"] = str(random_uid()) input["country"] = random.choices([*country], list(country.values()))[0] input["dayofweek"] = random.choices([*DAYOFWEEK], list(DAYOFWEEK.values()))[0] input["language"] = str(random.choices([*LANGUAGE], list(LANGUAGE.values()))[0]) input["operating_system"] = str(random.choices([*OS], list(OS.values()))[0]) input["month"] = random.choices([*MONTH], list(MONTH.values()))[0] for key, (mean, sd) in mean_sd.items(): sample_val = round(float(np.random.normal(mean, sd, 1))) val = max(sample_val, 0) input[key] = val print(f"Sending prediction {i}") try: send_predict_request(ENDPOINT, input) except Exception: print("prediction request failed") time.sleep(sleep) print("Test Completed.") test_time = 300 tests_per_sec = 1 sleep_time = 1 / tests_per_sec iterations = # TODO: Your code goes here perturb_num = {"cnt_user_engagement": (lambda x: x * 3, lambda x: x / 3)} perturb_cat = {"Japan": max(COUNTRY.values()) * 2} monitoring_test(iterations, sleep_time, perturb_num, perturb_cat) """ Explanation: You will notice the following components in these Cloud Storage paths: cloud-ai-platform-.. - This is a bucket created for you and assigned to capture your service's prediction data. Each monitoring job you create will trigger creation of a new folder in this bucket. [model_monitoring|instance_schemas]/job-.. - This is your unique monitoring job number, which you can see above in both the response to your job creation request and the email notification. instance_schemas/job-../analysis - This is the monitoring job's understanding and encoding of your training data's schema (field names, types, etc.). instance_schemas/job-../predict - This is the first prediction made to your model after the current monitoring job was enabled. model_monitoring/job-../serving - This folder is used to record data relevant to drift calculations. It contains measurement summaries for every hour your model serves traffic. model_monitoring/job-../training - This folder is used to record data relevant to training-serving skew calculations. It contains an ongoing summary of prediction data relative to training data. You can create monitoring jobs with other user interfaces In the previous cells, you created a monitoring job using the Python client library. You can also use the gcloud command line tool to create a model monitoring job and, in the near future, you will be able to use the Cloud Console for this function as well. Generate test data to trigger alerting Now you are ready to test the monitoring function. Run the following cell, which will generate fabricated test predictions designed to trigger the thresholds you specified above. It takes about five minutes to run this cell and at least an hour to assess and report anomalies in skew or drift, so after running this cell, feel free to proceed with the notebook and you'll see how to examine the resulting alert later. End of explanation """
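When you have finished experimenting, you can use the helper functions defined earlier to inspect, pause, or remove the monitoring job so it stops sampling and analyzing traffic. A minimal sketch -- the job id below is a placeholder, so substitute the full resource name returned by the create call or printed by list_monitoring_jobs():
list_monitoring_jobs()
job_name = f"projects/{PROJECT_ID}/locations/{REGION}/modelDeploymentMonitoringJobs/[your-job-id]"
pause_monitoring_job(job_name)
delete_monitoring_job(job_name)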
kevinjliang/Duke-Tsinghua-MLSS-2017
01B_TensorFlow_Fundamentals.ipynb
apache-2.0
import tensorflow as tf """ Explanation: The TensorFlow Tutorial I wish I had This notebook is adapted from notes that I took when learning TensorFlow. It provides a slow, thorough introduction to the fundamentals of TensorFlow, answering questions like: What exactly is TensorFlow? Why do we need it? How does the computation graph work? What are operations, variables, placeholders, and sessions? It assumes you know a good amount of Python. It starts from scratch with TensorFlow, but does not teach numpy or linear algebra; you won't necessarily need those skills to follow this tutorial, but understanding numpy basics, and how to "vectorize" your code, is an important skill for data scientists. Feel free to ask about resources for this if you're curious! What exactly does TensorFlow do? What is TensorFlow and what exactly does it do? Why do we need it? Why can't we do deep learning with pure Python or numpy? To answer these questions, you need to understand the problems TensorFlow trying to solve. Python's limitations Python is what's known as a dynamically typed programming language. This means that while your program is running, it stores in memory (alongside each variable) information about a variable's type. When you run x + y in Python, for example, Python's runtime looks up x's type, y's type, then figures out how to add those two things together, or throws an error if it doesn't know how. (For example, try adding a list and a string.) This is called a runtime type check. This automatic bookkeeping makes Python a pleasure to use, but also leads to inefficiencies. If we store a long list of numbers, for instance, we must allocate memory not just for the data itself but for each number's metadata (type information). If we then want to sum the list, using a loop, for instance, we need to do type checks for every addition operation we perform. This makes pure Python nearly unuseable for processing large datasets. Numpy for fast arithmetic That's where numpy comes in. In pure Python, a list is a Python object that holds other Python objects, each with its own metadata. The numpy package, on the other hand, exposes a new kind of list, called a numpy array, that holds values all of the same type. Since a numpy array must hold values only of one type, we can store the metadata once for the whole array, instead of separately for each element. Furthermore, since numpy array elements are all of one type, they are all guaranteed to be the same size in memory, which allows us to store them more compactly and access them more quickly. (In pure Python, if you stored all elements "next to each other" in memory, it would be costly to calculate, say, where in memory the 100th item started, as this would depend on the sizes of each previous object. So Python actually stores elements all over the place in memory, then keeps an "index" of the memory locations of each element of the array. This means to sum 100 elements, Python needs to look in the index 100 times and go all over your RAM to find the numbers you want to add. Numpy just stores the 100 items in a row, and since they're all the same size, it's easy to calculate where the 5th or 100th or 1000th item is pretty much instantly.) In addition to this compact array type, numpy provides a number of operations, implemented in C, that manipulate these arrays, taking advantage of their compact representation. 
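To get a concrete feel for the difference, here is a small illustrative timing comparison (not part of the original tutorial; the exact numbers depend on your machine, but the numpy version is typically orders of magnitude faster):
import time
import numpy as np

python_list = list(range(1000000))
numpy_array = np.arange(1000000)

start = time.time()
total = sum(python_list)       # pure Python: checks each element's type as it goes
print("pure Python sum:", time.time() - start)

start = time.time()
total = numpy_array.sum()      # numpy: a single call into compiled C code
print("numpy sum:", time.time() - start)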
Arrays can be multidimensional, so when we talk about operations on arrays, that includes what you might think of as matrix operations (like fast matrix multiplication) too. numpy is wonderful, and enables Python programmers to work efficiently with vast amounts of data. It is the foundation of higher-level packages like pandas and scipy. But numpy's design choices make certain tradeoffs; tensorflow makes a different set of choices and accordingly has a different set of tradeoffs. Downsides of numpy Even though single numpy operations are blazing-fast, when composing numpy operations, things can get a bit slower, because between each operation, you still have to go back and forth between Python's intepreter and numpy's C code. Especially when inspecting numpy values using, say, print, or logging them to files, you incur a cost, because a full translation must be made between the numpy value you are converting and the corresponding pure Python type that can interact with other Python code. The second downside of numpy really applies only to deep learning applications. The classic algorithm for training a deep model is (stochastic) gradient descent using backpropagation, which requires taking lots of derivatives. Before TensorFlow and other similar libraries, programmers manually (i.e., using pen and paper) did the calculus, deriving the symbolic gradient of the function to be minimized, then writing special code to take partial derivatives at an arbitrary input point. This is mechanical work that a computer should be able to do automatically. But numpy's structure does not provide an easy way of computing these derivatives automatically. Why? Automatically computing the derivative of some formula requires having some representation of that formula in memory. But when you run numpy operations, they simply execute and return their results; no trace is left of the steps used to get from first input to final output. There is no easy way to go back and compute derivatives later on in a program. Enter TensorFlow TensorFlow solves both these problems with the idea of a computation graph. Unlike in numpy, where data goes back and forth between Python and C for each operation, in TensorFlow, the programmer creates, in pure Python, an object-oriented representation of the entire computation she wishes to perform. This representation is called a "graph," in the graph theory sense: the nodes in the graph are operations, and the edges represent the data flowing from one operation to the next. Building the graph is like writing down a formula. No data is actually being processed. As such, all of TensorFlow's graph-building functions are lightweight and fast, simply creating a description of computation in memory. Once the graph is complete, it is sent to TensorFlow's low-level execution algorithm. That algorithm, written (like much of numpy) in C and C++, performs all the requested operations at once, then returns any values of interest (as specified by the user) back to the Python world. Because an entire graph of computation is processed at once, there is much less shuttling back and forth between one representation and another. And because the computation graph is essentially an in-memory record of every step used to compute each value in your program, TensorFlow is able to do the necessary calculus automatically, computing gradients based on the structure of that graph. The default graph, operations, and tensors Let's get started by importing TensorFlow. 
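As a tiny preview of the execution model just described (an illustrative sketch; both calls are explained properly below), compare how numpy and TensorFlow respond to the "same" request:
import numpy as np
print(np.add(2, 3))   # numpy computes immediately and prints 5
print(tf.add(2, 3))   # TensorFlow returns a Tensor handle and quietly adds an Add node to the default graph; nothing is computed yet
If you actually run this preview, reset the graph afterwards with tf.reset_default_graph() so the walkthrough below starts from an empty graph.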
End of explanation """ g = tf.get_default_graph() g """ Explanation: Even before you call your first TensorFlow function, a lot is going on behind the scenes. For example, an empty default graph object is created. (In order to make it easier to hit the ground running with TensorFlow, and to make using the library less verbose, Google has laced the system with global state. In order to fully understand what's happening in your program, you need to know that these state variables, like "current graph" and "current scope" and "current session", exist. As we'll soon see, most of the functions you'll call in TensorFlow operate by quietly accessing this hidden state.) We can access the default graph explicitly using tf.get_default_graph: End of explanation """ g.get_operations() """ Explanation: It is currently empty. We can check this fact by listing the operations (nodes) in the graph: End of explanation """ tf.constant(3.14) g.get_operations() """ Explanation: Let's start adding some operations to g. An operation is a node of the computation graph. It contains only some light metadata, like "I am an addition operation, and my inputs come from these two other operations." Although Python Operation objects don't actually do anything, we usually think of an operation in terms of what it will cause the execution engine to do after the graph has been completely built and is handed over to TensorFlow to run. Every operation takes in some number of inputs (0 or more), and produces 0 or more outputs. Its outputs can become the inputs to other operations. Executing an operation can also have side effects, like printing something to the console, recording data to a file, or modifying a variable in memory. Again, all this computation happens after the graph has been completely built. The Operation object itself is simply a description of computation that will take place. Perhaps the simplest operation we can create is constant. It has zero inputs and one output. When we create a constant operation, we define what that constant output will be (this is stored as metadata on the Operation object we create). TensorFlow's tf.constant function creates a constant operation and adds it to the default graph: End of explanation """ const_operation = g.get_operations()[0] len(const_operation.inputs), len(const_operation.outputs) """ Explanation: g now has a Const operation! Note that tf.constant affected the graph g, even though we didn't explicitly say we wanted the constant operation to be added to g. It is possible to add operations to a specific, non-default graph, but most of the time, we add directly to the default graph, using functions like tf.constant. In fact, we generally don't even call get_default_graph to give g a name; we just use it implicitly. Let's examine the constant operation we created. We can use the inputs and outputs attributes of the operation to confirm that there really are zero inputs and one output. End of explanation """ const_tensor = const_operation.outputs[0] const_tensor """ Explanation: Those inputs and outputs are of type Tensor. End of explanation """ another_const_tensor = tf.constant(1.414) another_const_tensor g.get_operations() """ Explanation: A Tensor is a lightweight Python object that represents a piece of data flowing along the edges of our graph. That data can be a scalar, a vector, a matrix, or a higher-dimensional array. (The dimensionality of a tensor t is accessible as an attribute: t.shape. Here, a shape of (), Python's empty tuple, means that t is a scalar.) 
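For example, we can inspect that metadata on the constant's output tensor directly, without running anything (the values in the comments are what we expect for the 3.14 constant above):
print(const_tensor.shape)   # ()
print(const_tensor.dtype)   # <dtype: 'float32'>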
The tensor's data is not actually stored inside the Tensor object, and in fact does not exist in Python at all. t does not know it will refer to the value 3.14. A Tensor is just a lightweight way to reference a piece of data that will be computed by TensorFlow's execution engine. All tensors are the outputs of some operation already in the graph. We use Tensors for two things: 1. When we eventually tell TensorFlow to run our computation graph, we need to let it know which of the many intermediate values we actually want it to report back to us. To do this, we pass in a list of tensors, references to specific values it will compute, that we wish it to "fetch." 2. When we create operations with inputs, we need to tell them which other operations' outputs to consume. To do this, we specify input tensors. To see this in action, let's create another constant operation, then an addition operation that consumes the output tensors of each constant and sums them. First, we call tf.constant to create our second constant. As a convenience, the tf.constant function actually returns the output tensor of the constant operation it creates: End of explanation """ sum_tensor = tf.add(const_tensor, another_const_tensor) sum_tensor g.get_operations() add_op = g.get_operations()[2] len(add_op.inputs), len(add_op.outputs) """ Explanation: Now, there are two operations in the graph. TensorFlow has named them 'Const' and 'Const_1', but you can also give them names yourself by passing a name keyword argument to the tf.constant function. Tensors are named following the formula op_name:output_number, so Const_1:0 means "the 0th output of the Const_1 operation." Let's create an addition operation to add the two tensors together, using tf.add. Again, this creates an operation in the default graph, and returns a reference to the operation's output -- a tensor. End of explanation """ # This piece is only necessary so as not to use up an ungodly amount of GPU memory: config = tf.ConfigProto() config.gpu_options.allow_growth = True # This is the actual code creating the session. You can omit the config arg # if you have no configuration to do. sess = tf.Session(config=config) sess.graph == g """ Explanation: It should make sense that add_op has two inputs and one output. Running computations using tf.Session In order to execute your computation graph, you need to create a Session object. A Session object stores a reference to the graph to execute, some configuration information (determining, for instance, how much memory TensorFlow should allocate on your GPU), and storage for stateful components of your computation graph (which must be kept around across multiple executions of the graph). We haven't talked about any "stateful components" yet, so it's okay if that piece doesn't make sense yet. We'll see Variables, the simplest stateful component, very soon. You can create a session using the tf.Session constructor. It can take in a graph parameter, but if you omit it, the session will use the default graph. End of explanation """ sess.run(sum_tensor), sess.run(const_tensor), sess.run(another_const_tensor) """ Explanation: The run method of the Session class is used to send your graph to the computation engine and execute it. As an argument, we pass a tensor we'd like to "fetch" the value of. Based on which tensor we'd like to compute, TensorFlow will calculate a path through the graph, and then execute only the parts of the graph necessary to compute our desired tensor. 
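For instance, sess = tf.Session(graph=g) would tie the session to our graph g explicitly; since g is the default graph here, that is equivalent to calling tf.Session() with no arguments (a small illustrative aside -- the next cell also passes a config object, which is unrelated to the choice of graph).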
End of explanation """ sess.run([sum_tensor, const_tensor, another_const_tensor]) sess.run({'a': const_tensor, 'b': another_const_tensor, '[a,b,a+b]': [const_tensor, another_const_tensor, sum_tensor]}) """ Explanation: Above, we call sess.run three times, which invokes the execution engine three times. There is no memoization; each time you call sess.run, everything is computed anew. Because of this, if you want to fetch more than one tensor, it's more efficient to fetch them all in one go, by passing a list to sess.run. You can also pass a dictionary, a tuple, a named tuple, or nested combinations of these data structures. End of explanation """ sess.close() """ Explanation: In the last example, TensorFlow created a copy of the fetches data structure, but with each tensor replaced by its actual value, computed during execution. When you're done with a session, you should close it, to free any resources it's keeping track of: End of explanation """ a = tf.placeholder(tf.float32) a b = tf.placeholder(tf.float32) flexible_sum_tensor = tf.add(a, b) g.get_operations() """ Explanation: Sessions are more powerful than this, but to understand why, we need to talk about placeholders and variables. Placeholders Our program above is a bit rigid. If we wanted to perform the exact same addition calculation on different inputs, we would need to create a whole new trio of operations; we have no power to abstract. Placeholders fix this. A placeholder operation, just like a constant, has 0 inputs and 1 output. However, instead of fixing the output value when you define the operation, we pass the placeholder's value to sess.run when executing the graph. This allows us to execute the same graph multiple times with different placeholder values. To add a placeholder operation to the default graph, we use tf.placeholder, passing in the type of the value we'd like the placeholder to hold. Valid types include tf.int8, tf.bool, tf.float32, and a whole lot more. See the documentation for a complete list. Like tf.constant, tf.placeholder returns a tensor (the output of the placeholder). We'll make a graph for adding two floats. End of explanation """ sess = tf.Session(config=config) sess.run(flexible_sum_tensor, feed_dict={a: 1., b: 2.}) sess.run(flexible_sum_tensor, feed_dict={a: [1.], b: [2.]}) sess.run(flexible_sum_tensor, feed_dict={a: [[1., 2.], [3., 4.]], b: [[5., 6.], [7., 8.]]}) sess.close() """ Explanation: When we call sess.run, we now pass in a second argument, feed_dict. This is a Python dictionary in which the keys are placeholder tensors (i.e., the outputs of placeholder operations) and the values are numbers, numpy arrays, or Python lists. (Numbers, lists, and numpy arrays can all be converted automatically into a data format compatible with TensorFlow's execution engine.) Note that the keys of feed_dict are the actual tensor objects, not strings. End of explanation """ distance_from_origin = tf.sqrt((a * a) + (b * b)) g.get_operations() distance_from_origin sess = tf.Session(config=config) sess.run(distance_from_origin, feed_dict={a: 3., b: 4.}) sess.close() """ Explanation: A few things to note: 1. We were able to fetch the same tensor multiple times but with different feed_dict arguments. 2. The placeholder's dimension is dynamic. (You can optionally specify a static dimension when defining a placeholder, by passing a shape argument to tf.placeholder.) 3. The add operation works much like numpy's add operation, in that it can add (element-wise) two multidimensional arrays. 
It also supports broadcasting, if you're familiar with that from the numpy world. 4. When asked to fetch an array (single- or multi-dimensional), TensorFlow returns a numpy array. As an aside, the Python arithmetic operators ($+$, $-$, $/$, $*$) are overridden for the Tensor type. Evaluating the expression $a + b$, where $a$ and $b$ are tensors, has the side effect of adding an Add operation to the default graph with $a$ and $b$ as inputs. The result of evaluating $a + b$ is the output tensor of that add operation. This makes it easy to add many nodes quickly. For instance, End of explanation """ with tf.Session(config=config) as sess: print(sess.run(distance_from_origin, feed_dict={a: 9., b: 12.})) """ Explanation: Another trick: these last three lines can be condensed to two, using Python's with feature. End of explanation """ tf.reset_default_graph() g.get_operations() """ Explanation: The session is closed automatically at the end of the with block. Note that trying to call sess.run(distance_from_origin) without feeding in a and b will result in an error. Furthermore, placeholder values are not persistent across multiple calls to sess.run, so even if you've previously provided values for a and b in prior calls to sess.run, these are no longer in memory anywhere by the time you make your next sess.run call. One final trick for convenience. Since our graph is getting a bit crowded, we may want to clear it: End of explanation """ g = tf.get_default_graph() g.get_operations() """ Explanation: The operations are all still there! That's because reset_default_graph doesn't delete operations, it just creates a new graph and makes it the default. g still refers to the old graph. We can fix this (and let the old graph be garbage-collected) by reassigning g: End of explanation """ x = tf.Variable(42) # summarize the operations now in the graph: [(op, "{} inputs and {} output".format(len(op.inputs), len(op.outputs))) for op in g.get_operations()] """ Explanation: Variables Like constants and placeholders, variable operations take 0 inputs and produce 1 output; the big difference is that a variable is mutable and persistent across runs of your graph (within a session). Whereas a constant's value is fixed when creating the graph, and a placeholder's value is set anew each time you call sess.run, a variable's value is set or changed while the graph is running, by side-effect-ful "assign" operations, and remembered even after sess.run is finished. (That memory is in the Session object, which manages stateful components like variables. Calling sess.close is necessary to free that memory.) Let's use tf.Variable to add a variable to our new graph, initialized to 42. End of explanation """ sess = tf.Session(config=config) """ Explanation: Wow -- four operations were just added to the graph! Let's go through them one by one: 1. The first should look familiar. Although it's called Variable/initial_value, it is actually just a constant operation. Its output is a tensor that will evaluate to 42. 2. The second is a "Variable" operation. Like placeholder and constant, it has no inputs and one output. But its output type is int32_ref, a mutable int32. You can use an int32_ref basically anywhere you can use an int32. But you can also use it as an input to an assign operation, which is up next. 3. The third operation is an assign op. It takes two inputs, a ref to a mutable value (almost always the output of a Variable op), and a value, then assigns the variable to that value. 
This assign op takes its input from the Variable/initial_value constant and the Variable ref just discussed. When executed, it has the effect of initializing our variable to 42. The output is another int32_ref tensor, referring to the same mutable storage as the output from (2). 4. Finally, we have an identity operation. It takes the variable ref as input and outputs a non-ref tf.int32. This isn't necessary in most cases, where it is OK to use the tf.int32_ref directly. $x$, the return value of the tf.Variable constructor, is a Variable object, not a tensor, but in practice, you can use it as if it were a tensor. It will either be interpreted as the tensor output of operation (2) above -- the int32_ref pointing to the variable's current value -- or of operation (4), depending on context. Let's start a session and play with our variable. End of explanation """ sess = tf.Session(config=config) """ Explanation: In fact, let's start a second session as well. One of the roles sessions play is to keep track of variable values across executions of the graph, and this will help us visualize that. End of explanation """ sess2 = tf.Session(config=config) """ Explanation: If we attempt to get the value of the variable, we will get an error: End of explanation """ sess.run(x) """ Explanation: "Attempting to use uninitialized value Variable." In order to initialize the variable, we actually have to run the assign op that was created for us. Recall that this was the third operation in the graph. If we want to run an operation just for its side effect, not to fetch its output, sess.run does support passing in an op directly anywhere you would normally pass a tensor (as part of the fetches data structure). Here, we'll just pass it standalone: End of explanation """ sess.run(g.get_operations()[2]) sess.run(x) """ Explanation: But in sess2, $x$ is still not initialized, and running sess2.run(x) still gives an error: End of explanation """ sess2.run(x) """ Explanation: Let's fix that. End of explanation """ sess2.run(x.initializer), sess2.run(x) """ Explanation: Notice: 1. Variable objects have an initializer property that refers to the Assign operation that gives them their initial value. 2. Fetching an operation instead of a tensor performs its side effect but returns None -- even if the operation has an output. Let's return to a blank slate. End of explanation """ sess.close() sess2.close() tf.reset_default_graph() g = tf.get_default_graph() """ Explanation: Computing gradients In machine learning, we typically compute some loss function (or error) that quantifies how poorly our model is capturing the patterns in the data. In many models, this loss function is differentiable, which means we can compute partial derivatives of our loss function with respect to the parameters of our model. These derivatives essentially tell us: as each parameter changes, how does it affect the loss? Given this information, we have an easy way to make the model better: perturb the parameters slightly in the direction that causes the loss to go down, according to the partial derivatives. We can do this repeatedly, computing new partial derivatives after each step, until we get to a local minimum. This technique is called "gradient descent." One benefit of using a computation graph is that TensorFlow can automatically calculate these derivatives for us. More accurately, it can augment our computation graph with operations that compute any derivatives we'd like.
To do this, we use the tf.gradients function. It takes in two arguments: ys and xs. ys is a tensor or list of tensors we'd like to calculate the derivatives of. xs is a tensor or list of tensors we'd like to calculate those derivatives with respect to. When called, tf.gradients traverses the computation graph backward from ys to xs, adding for each operation along that path one or more supplemental operations for computing the gradient. These individual-operation gradients can be composed using the chain rule (which tf.gradients also takes care of). The return value of tf.gradients is a handle to a tensor that represents the answer, $\frac{dy}{dx}$, at the end of that chain. This will hopefully become clearer with a simple example. Let's create a small graph: End of explanation """ grad = tf.gradients(eqn, b) g.get_operations() """ Explanation: Notice that Tensorflow has added a constant node, mul/x, to represent the constant 2. Other than that, this should look as expected: an op for a and for b, a multiplication, and an addition. (Remember, Python's operations + and * have been overridden for the tensor type, and they now have side effects of adding operations to the default graph!) Let's now calculate the derivative of eqn with respect to b. End of explanation """ grad """ Explanation: As you can see, a lot of new nodes were added for gradient calculation. The output of the last op listed above will be our derivative, $\frac{d\text{eqn}}{db}$. tf.gradients returns that tensor, so we don't have to grab it explicitly: End of explanation """ sess = tf.Session(config=config) sess.run(grad) """ Explanation: We can now execute the graph: End of explanation """ gradient_wrt_both = tf.gradients(eqn, [a, b]) sess.run(gradient_wrt_both) """ Explanation: As expected, we get 2, the partial derivative of eqn with respect to b at the point (a=5, b=7) (at any point, actually, but TensorFlow is computing it at this specific point). Let's look at a couple variations. If we pass multiple xs, we get a gradient back instead of just a partial derivative -- this is a list of partial derivatives. End of explanation """ eqn2 = a * b + b gradient_of_eqn2 = tf.gradients(eqn2, [a,b]) sess.run(gradient_of_eqn2) """ Explanation: Let's create a second equation. End of explanation """ gradient_with_two_ys = tf.gradients([eqn, eqn2], [a,b]) sess.run(gradient_with_two_ys) """ Explanation: Remember that a = 5 and b = 7, which is why we get the values we did above. Although you can think of what Tensorflow does as a kind of symbolic differentiation (it constructs a formula, i.e. computation graph, for computing a derivative), it always calculates gradients at a specific point. If we pass in more than one tensor as the first argument, Tensorflow computes the sum of the derivatives of each of these tensors with respect to the x values, $\sum_{y \in \text{ys}}\frac{dy}{dx}$. In other words, computing tf.gradients([eqn, eqn2], ...) below is not too different from tf.gradients(eqn + eqn2, ...), except that the latter actually adds ops to the graph to compute eqn + eqn2, whereas the former adds ops to sum the gradients. End of explanation """ sess.close() tf.reset_default_graph() g = tf.get_default_graph() """ Explanation: Here, [8, 8] == [7, 6] + [1, 2]. End of explanation """ x = tf.Variable(5.0) loss = x * x g.get_operations() """ Explanation: Optimization Why do we care about gradients? 
As mentioned above, being able to compute them so easily opens up the possibility of automatically implementing gradient descent, an algorithm that follows derivatives downhill to find a local minimum of some function. In machine learning, we typically want to find the model parameters that minimize some loss, or error, function that measures how bad our model is. TensorFlow provides amazing tools for doing just that, but to make sure we understand how they work, we'll implement a very simple gradient descent algorithm ourselves. We will be doing something very simple: finding the $x$ that minimizes $x^2$. (Yes, you can do this analytically, or by thinking about it for a second. But this example will bring together some of the ideas we've been discussing about variables and gradients.) Let's start building our simple computational graph. It will consist of a variable, $x$, which is initialized to 5. It will also contain a "loss function," $x^2$. End of explanation """ x = tf.Variable(5.0) loss = x * x g.get_operations() """ Explanation: To start with, we'll calculate our derivative manually, without even using tf.gradients. The derivative of $x^2$ at a point is just $2x$: End of explanation """ dloss_dx = 2. * x """ Explanation: We will now create an operation that changes $x$ based on the current derivative of the loss. We can do this using the tf.assign function, which creates an operation that assigns a variable to a new value. (The Assign operation has two inputs, the reference to the variable's mutable storage, and the new value we wish to assign. As mentioned in the Variables section, we can pass a Variable object as the first input and TensorFlow will substitute the correct tensor. The Assign operation has one output, the tensor for which is returned by tf.assign, which is the new value of the variable after assignment.) Because the derivative points in the direction in which our loss is growing fastest, we will subtract $x$'s derivative from $x$. (After all, we want the loss to shrink, not grow.) But rather than subtract the entire derivative, we will subtract a fraction of it. (We generally try not to take large steps in gradient descent; just because the derivative points us in some direction now, doesn't mean that going arbitrarily far in that direction will keep bringing us down the hill. $x^2$ provides a good example: if we are currently at $x=5$, the derivative is $10$, but going $-10$ steps to $-5$ completely steps over the valley we are trying to reach at $x=0$! By multiplying our derivative by a small step size of, say, 0.1, we can avoid this fate and progress slowly but almost surely to our goal.) End of explanation """ step_size = 0.1 new_x_value = x - dloss_dx * step_size assign_new_x = tf.assign(x, new_x_value) """ Explanation: Our graph is in place. In order to run our algorithm, we need to: 1. Initialize x by calling sess.run(x.initializer). 2. Repeatedly evaluate the assign_new_x operation. (Note: assign_new_x is actually a tensor, the output of the assignment operation. Evaluating this tensor will cause the operation to run. If we really wanted to run just the op, we could use assign_new_x.op; remember that every tensor knows which op it is the output of.)
It is essential that we do all this inside of a single session, because that session is what will keep track of the current value of x. End of explanation """ tf.reset_default_graph() g = tf.get_default_graph() x = tf.Variable(5.0) loss = x * x # tf.gradients is called "gradients" and not "derivative" for a reason: it # returns a _list_ of partial derivatives, even if you only pass in one x. # Pull out first element (in our case, the list only has one element). dloss_dx = tf.gradients(loss, x)[0] new_x_value = x - dloss_dx * step_size assign_new_x = tf.assign(x, new_x_value) with tf.Session(config=config) as sess: # Initialize x: sess.run(x.initializer) # We will fetch the following tensors each iteration to_fetch = {'x': x, 'loss': loss, 'derivative': assign_new_x} for i in range(100): fetched = sess.run(to_fetch) # every tenth step, print our progress if i % 10 == 0 or i+1==100: print("Iter {}: {}".format(i, fetched)) """ Explanation: As you can see, the value of loss gets closer and closer to 0 as time goes on. Now let's replace our manual derivative calculation with automatic differentiation, and convince ourselves that nothing changes: End of explanation """ tf.reset_default_graph() g = tf.get_default_graph() x = tf.Variable(5.0) loss = x * x # Create the optimizer optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1) # Add optimize nodes to the graph minimize_loss = optimizer.minimize(loss) """ Explanation: The algorithm we have used here is vanilla gradient descent. TensorFlow actually comes with a whole family of optimizers that we can just plug into our model. We do not need to call tf.gradients or manually assign a variable at all; TensorFlow can create that whole portion of our computational graph. To do this, we use one of TensorFlow's premade Optimizer classes. An Optimizer object keeps track of some parameters of the optimization algorithm (like step size, also called learning rate), and has a minimize method that can be used to automatically construct the portions of the computation graph that perform gradient computation and gradient descent. Here's how our example from above looks with TensorFlow's GradientDescentOptimizer: End of explanation """ with tf.Session(config=config) as sess: # Initialize x: sess.run(x.initializer) # We will fetch the following tensors each iteration to_fetch = {'x': x, 'loss': loss, 'train_op': minimize_loss} for i in range(100): fetched = sess.run(to_fetch) # every tenth step, print our progress if i % 10 == 0 or i+1==100: print("Iter {}: {}".format(i, fetched)) """ Explanation: minimize_loss now refers to an operation that runs a single step of gradient descent, updating x. (By default optimizer.minimize assumes you want to run gradient descent on every Variable in the computation graph. If you want to change this, you can pass in a var_list argument, specifying exactly which variables should be updated.) Note that before, assign_new_x referred to a tensor; minimize_loss refers to an actual operation, and sess.running it will return None. Still, it is working, and produces the exact same values as our previous two attempts: End of explanation """ adam_optimizer = tf.train.AdamOptimizer(learning_rate=0.1) minimize_loss = adam_optimizer.minimize(loss) g.get_operations() """ Explanation: There are other optimizers that implement clever variations of gradient descent. 
For example, here's the popular Adam optimizer: End of explanation """ adam_optimizer = tf.train.AdamOptimizer(learning_rate=0.1) minimize_loss = adam_optimizer.minimize(loss) g.get_operations() """ Explanation: Notice that some of the operations Adam has added are Variables! This is because Adam keeps track of certain statistics across iterations, to implement a sort of "momentum." Because of this, before we can run our minimize_loss operation, we need to make sure Adam's variables are initialized. Rather than painstakingly initialize each one, we can use tf.global_variables_initializer() to add an op to the graph that initializes all variables created so far. End of explanation """ initialize_all = tf.global_variables_initializer() with tf.Session(config=config) as sess: # Initialize ALL variables sess.run(initialize_all) # We will fetch the following tensors each iteration to_fetch = {'x': x, 'loss': loss, 'train_op': minimize_loss} for i in range(100): fetched = sess.run(to_fetch) # every tenth step, print our progress if i % 10 == 0 or i+1==100: print("Iter {}: {}".format(i, fetched)) """ Explanation: As you can see, for our extremely simple problem, Adam hurts more than it helps. Its momentum feature means that the big derivatives from early on in training still have an effect even as we get close to the valley, causing us to overshoot the minimum. Running this for more than 100 iterations would eventually bring us back to 0. (An image that is sometimes used to explain momentum-based optimization approaches vs. typical gradient descent is that if normal gradient descent can be thought of as a man slowly and cautiously walking downhill, momentum-based approaches are better understood as a heavy ball rolling its way downhill. The ball will likely overshoot the bottom then have to roll back down, taking a while to settle. The benefit is that the ball can roll straight past small local minima, and stay immune to certain types of pathological terrains.) Machine learning with TensorFlow So, how do we use all this for machine learning? At this point, you're likely ready to move on to any other TensorFlow tutorial you find on the Internet (or in this GitHub repo). But in case you want to stay here, we'll work our way through a simple example: we'll create a fake dataset of temperatures and number of hospital visits in a fictional city over a number of months. (We assume that colder temperatures lead to more hospital visits.) We'll then fit a linear model to this data. The goal of this exercise is for you to see how placeholders, variables, and gradient descent come together to enable fitting a model to some data. Although we'll use a linear model here -- i.e., we assume that the data y can be understood as a linear function of the input $x$ -- the process is the exact same when using a nonlinear model, like a neural network. Creating the fake data Let's begin by creating some fake data for us to train on. We'll make 1000 samples. We generate temperatures according to a normal distribution using numpy, then we generate hospital visit numbers according to the formula 1000 - 5*temps, plus some random noise. End of explanation """ import numpy as np import matplotlib.pyplot as plt temps = np.random.normal(55, 20, 1000) random_noise = np.random.normal(0, 100, 1000) hosp_visits = 1000 - 5 * temps + random_noise plt.plot(temps[:200], hosp_visits[:200], "o") """ Explanation: It's common to perform some data cleaning operations on our input data before attempting to use a machine learning algorithm.
We'll do that here, subtracting the (empirical) mean and dividing by the standard deviation of the temperature: End of explanation """ train_X, train_y = normalized_temps[:800], hosp_visits[:800] valid_X, valid_y = normalized_temps[800:900], hosp_visits[800:900] test_X, test_y = normalized_temps[900:], hosp_visits[900:] """ Explanation: Now, let's divide the data into training, validation, and test sets. End of explanation """ def batch_generator(X, y, batch_size): total_batches = len(X) // batch_size current_batch = 0 while True: start = batch_size * current_batch end = start + batch_size yield (X[start:end], y[start:end]) current_batch = (current_batch + 1) % total_batches training_generator = batch_generator(train_X, train_y, batch_size=100) # Later, call next(training_generator) to get a new batch of the form (X, y) """ Explanation: Finally, we'll create a "batch generator" for the training set. The following function is a Python generator function; instead of returning a value, it continuously yields new batches of data. When we call batch_generator, Python creates a generator iterator, which we here call training_generator, that we can use with Python's next function. End of explanation """ tf.reset_default_graph() g = tf.get_default_graph() """ Explanation: Building the model Now it's time to build our model. Let's first get a new, empty graph to work with. End of explanation """ X = tf.placeholder(tf.float32) y = tf.placeholder(tf.float32) m = tf.Variable(0.) b = tf.Variable(0.) predicted_y = X * m + b """ Explanation: We then set up the major quantities in our model as follows: 1. Our data, the temperatures X and the hospital visit numbers y, are represented with placeholders. This is so we can fill these values with our actual data at execution time. (You may wonder: why not just put all the data in as a constant? Typically, rather than use Gradient Descent, we use Stochastic Gradient Descent, which means that instead of taking gradients of the loss computed on all our data every iteration, we feed in a small "batch" of data to the graph each iteration of training. This is more efficient, lets us handle large datasets, and provably converges to a local minimum just like normal Gradient Descent. To use this technique, we need placeholders: each time we call sess.run, we'll pass in different data.) 2. The parameters of our model, which we hope to learn, are represented as TensorFlow variables. This is so that as we run the training operation repeatedly, their current values can change. 3. We then use TensorFlow operations like addition and multiplication to create predicted y values based on the X values, according to our model. The loss will be computed based on this prediction and its divergence from the true y. End of explanation """ avg_loss = tf.reduce_mean(tf.squared_difference(predicted_y, y)) """ Explanation: Computing the loss To compute the loss -- a quantity measuring how bad our model is -- we use sum-of-squares formula: for each data point in the current batch of data, we subtract the real y from our predicted_y and square the difference; then we take the average of all these. (Taking their sum would work just as well, but by taking the average, we get a number that doesn't depend on the amount of data in a batch, which can be useful for human interpretation and for comparing models with different batch sizes.) 
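(Written out as an added note: for a batch of $N$ points the loss is $\frac{1}{N}\sum_{i=1}^{N}(\hat{y}_i - y_i)^2$, the familiar mean squared error.)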
End of explanation """ train_one_step = tf.train.GradientDescentOptimizer(learning_rate=0.0005).minimize(avg_loss) """ Explanation: Creating an optimizer Now it's time to actually create our optimization op. We'll use the basic GradientDescentOptimizer here, with a learning rate of 0.0005. (How did we come by this number? Trying different numbers and seeing how the losses looked on the validation set. Feel free to play with this a bit more.) End of explanation """ init_all_vars = tf.global_variables_initializer() with tf.Session(config=config) as sess: sess.run(init_all_vars) for i in range(5000): X_batch, y_batch = next(training_generator) feed_dict = {X: X_batch, y: y_batch} _, loss, m_pred, b_pred = sess.run([train_one_step, avg_loss, m, b], feed_dict=feed_dict) if i % 500 == 0: validation_feed_dict = {X: valid_X, y: valid_y} valid_loss = sess.run(avg_loss, feed_dict=validation_feed_dict) print("Iter {}: training loss = {}, validation loss = {}, m={}, b={}".format(i, loss, valid_loss, m_pred, b_pred)) test_feed_dict = {X: test_X, y: test_y} m_pred, b_pred, loss = sess.run([m, b, avg_loss], test_feed_dict) print("m: {}, b: {}, test loss: {}".format(m_pred, b_pred, loss)) """ Explanation: Initializing variables and training Finally, it's time for training. Let's add an op to the graph that initializes all variables, then start a session and run the training code. End of explanation """
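# Added sanity check (not part of the original tutorial): m_pred and b_pred from the final
# sess.run above are ordinary numpy scalars, so they can be used directly for a quick prediction.
# Because the temperatures were standardized, a normalized temperature of 0 is an average day,
# and the model's prediction for it is simply the intercept b.
print("Predicted visits on an average-temperature day:", b_pred)
print("Predicted change in visits per standard deviation of temperature:", m_pred)
"""
Explanation: These two numbers summarize the fitted model: the intercept is the expected number of visits at the mean temperature, and the slope is the expected change per standard deviation of temperature (it should come out negative, matching how the fake data was generated).
End of explanation
"""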
lrq3000/unireedsolomon
Generating the exponent and log tables.ipynb
mit
generator = ff.GF2int(3) generator """ Explanation: I used 3 as the generator for this field. For a field defined with the polynomial x^8 + x^4 + x^3 + x + 1, there may be other generators (I can't remember) End of explanation """ generator*generator generator*generator*generator generator**1 generator**2 generator**3 """ Explanation: Multiplying the generator by itself is the same as raising it to a power. I show up to the 3rd power here End of explanation """ generator.multiply(generator) generator.multiply(generator.multiply(generator)) """ Explanation: The slow multiply method implemented without the lookup table has the same results End of explanation """ exptable = [ff.GF2int(1), generator] for _ in range(254): # minus 2 because the first 2 elements are hardcoded exptable.append(exptable[-1].multiply(generator)) # Turn back to ints for a more compact print representation print([int(x) for x in exptable]) """ Explanation: We can enumerate the entire field by repeatedly multiplying by the generator. (The first element is 1 because generator^0 is 1). This becomes our exponent table. End of explanation """ exptable[5] == generator**5 all(exptable[n] == generator**n for n in range(256)) [int(x) for x in exptable] == [int(x) for x in ff.GF2int_exptable] """ Explanation: That's now our exponent table. We can look up the nth element in this list to get generator^n End of explanation """ logtable = [None for _ in range(256)] # Ignore the last element of the field because fields wrap back around. # The log of 1 could be 0 (g^0=1) or it could be 255 (g^255=1) for i, x in enumerate(exptable[:-1]): logtable[x] = i print([int(x) if x is not None else None for x in logtable]) [int(x) if x is not None else None for x in logtable] == [int(x) if x >= 0 else None for x in ff.GF2int_logtable] """ Explanation: The log table is the inverse function of the exponent table End of explanation """
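# Added sketch (not in the original notebook): the payoff of building these two tables is that
# multiplication in GF(2^8) collapses into table lookups,
#     a * b = exptable[(logtable[a] + logtable[b]) % 255]   for nonzero a and b.
# table_multiply below is a hypothetical helper written only for illustration.
def table_multiply(a, b):
    if a == 0 or b == 0:
        return ff.GF2int(0)
    return exptable[(logtable[a] + logtable[b]) % 255]

# spot-check the lookup-based product against the slow multiply method for every pair
all(table_multiply(a, b) == ff.GF2int(a).multiply(ff.GF2int(b))
    for a in range(256) for b in range(256))
"""
Explanation: If the tables were generated correctly, the lookup-based product agrees with the slow multiply method for every pair of field elements, which is exactly the shortcut that table-based Galois field arithmetic relies on.
End of explanation
"""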
liangjg/openmc
examples/jupyter/hexagonal-lattice.ipynb
mit
%matplotlib inline import openmc fuel = openmc.Material(name='fuel') fuel.add_nuclide('U235', 1.0) fuel.set_density('g/cm3', 10.0) fuel2 = openmc.Material(name='fuel2') fuel2.add_nuclide('U238', 1.0) fuel2.set_density('g/cm3', 10.0) water = openmc.Material(name='water') water.add_nuclide('H1', 2.0) water.add_nuclide('O16', 1.0) water.set_density('g/cm3', 1.0) materials = openmc.Materials((fuel, fuel2, water)) materials.export_to_xml() """ Explanation: In this example, we will create a hexagonal lattice and show how the orientation can be changed via the cell rotation property. Let's first just set up some materials and universes that we will use to fill the lattice. End of explanation """ r_pin = openmc.ZCylinder(r=0.25) fuel_cell = openmc.Cell(fill=fuel, region=-r_pin) water_cell = openmc.Cell(fill=water, region=+r_pin) pin_universe = openmc.Universe(cells=(fuel_cell, water_cell)) r_big_pin = openmc.ZCylinder(r=0.5) fuel2_cell = openmc.Cell(fill=fuel2, region=-r_big_pin) water2_cell = openmc.Cell(fill=water, region=+r_big_pin) big_pin_universe = openmc.Universe(cells=(fuel2_cell, water2_cell)) all_water_cell = openmc.Cell(fill=water) outer_universe = openmc.Universe(cells=(all_water_cell,)) """ Explanation: With our three materials, we will set up two universes that represent pin-cells: one with a small pin and one with a big pin. Since we will be using these universes in a lattice, it's always a good idea to have an "outer" universe as well that is applied outside the defined lattice. End of explanation """ lattice = openmc.HexLattice() """ Explanation: Now let's create a hexagonal lattice using the HexLattice class: End of explanation """ lattice.center = (0., 0.) lattice.pitch = (1.25,) lattice.outer = outer_universe """ Explanation: We need to set the center of the lattice, the pitch, an outer universe (which is applied to all lattice elements outside of those that are defined), and a list of universes. Let's start with the easy ones first. Note that for a 2D lattice, we only need to specify a single number for the pitch. End of explanation """ print(lattice.show_indices(num_rings=4)) """ Explanation: Now we need to set the universes property on our lattice. It needs to be set to a list of lists of Universes, where each list of Universes corresponds to a ring of the lattice. The rings are ordered from outermost to innermost, and within each ring the indexing starts at the "top". To help visualize the proper indices, we can use the show_indices() helper method. End of explanation """ outer_ring = [big_pin_universe] + [pin_universe]*17 # Adds up to 18 ring_1 = [big_pin_universe] + [pin_universe]*11 # Adds up to 12 ring_2 = [big_pin_universe] + [pin_universe]*5 # Adds up to 6 inner_ring = [big_pin_universe] """ Explanation: Let's set up a lattice where the first element in each ring is the big pin universe and all other elements are regular pin universes. From the diagram above, we see that the outer ring has 18 elements, the first ring has 12, and the second ring has 6 elements. The innermost ring of any hexagonal lattice will have only a single element. We build these rings through 'list concatenation' as follows: End of explanation """ lattice.universes = [outer_ring, ring_1, ring_2, inner_ring] print(lattice) """ Explanation: We can now assign the rings (and the universes they contain) to our lattice. 
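(A quick added check before doing the assignment: the list lengths built above must line up with the ring sizes in the index diagram, 18, 12, 6, and 1 universes from the outermost ring inward.)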
End of explanation """ outer_surface = openmc.ZCylinder(r=5.0, boundary_type='vacuum') main_cell = openmc.Cell(fill=lattice, region=-outer_surface) geometry = openmc.Geometry([main_cell]) geometry.export_to_xml() """ Explanation: Now let's put our lattice inside a circular cell that will serve as the top-level cell for our geometry. End of explanation """ plot = openmc.Plot.from_geometry(geometry) plot.color_by = 'material' plot.colors = colors = { water: 'blue', fuel: 'olive', fuel2: 'yellow' } plot.to_ipython_image() """ Explanation: Now let's create a plot to see what our geometry looks like. End of explanation """ # Change the orientation of the lattice and re-export the geometry lattice.orientation = 'x' geometry.export_to_xml() # Run OpenMC in plotting mode plot.to_ipython_image() """ Explanation: At this point, if we wanted to simulate the model, we would need to create an instance of openmc.Settings, export it to XML, and run. Lattice orientation Now let's say we want our hexagonal lattice orientated such that two sides of the lattice are parallel to the x-axis. This can be achieved by two means: either we can rotate the cell that contains the lattice, or we can can change the HexLattice.orientation attribute. By default, the orientation is set to "y", indicating that two sides of the lattice are parallel to the y-axis, but we can also change it to "x" to make them parallel to the x-axis. End of explanation """ print(lattice.show_indices(4, orientation='x')) """ Explanation: When we change the orientation to 'x', you can see that the first universe in each ring starts to the right along the x-axis. As before, the universes are defined in a clockwise fashion around each ring. To see the proper indices for a hexagonal lattice in this orientation, we can again call show_indices but pass an extra orientation argument: End of explanation """ main_cell.region = openmc.model.hexagonal_prism( edge_length=4*lattice.pitch[0], orientation='x', boundary_type='vacuum' ) geometry.export_to_xml() # Run OpenMC in plotting mode plot.color_by = 'cell' plot.to_ipython_image() """ Explanation: Hexagonal prisms OpenMC also contains a convenience function that can create a hexagonal prism representing the interior region of six surfaces defining a hexagon. This can be useful as a bounding surface of a hexagonal lattice. For example, if we wanted the outer boundary of our geometry to be hexagonal, we could change the region of the main cell: End of explanation """
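# Added sketch (not part of the original example): the minimal settings that would be needed
# before actually simulating the model, as mentioned earlier. Every number here is a
# placeholder rather than a recommended value.
settings = openmc.Settings()
settings.batches = 50
settings.inactive = 10
settings.particles = 1000
settings.source = openmc.Source(space=openmc.stats.Point((0., 0., 0.)))
settings.export_to_xml()
# openmc.run() would then launch the transport calculation
"""
Explanation: This only illustrates the "create an instance of openmc.Settings, export it to XML, and run" step described above; a real calculation would need sensible batch, particle, and source choices for this geometry.
End of explanation
"""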
LorenzoBi/courses
TSAADS/tutorial 2/.ipynb_checkpoints/Untitled-checkpoint.ipynb
mit
import numpy as np import matplotlib.pyplot as plt import scipy.io as sio from sklearn import datasets, linear_model %matplotlib inline def set_data(p, x): temp = x.flatten() n = len(temp[p:]) x_T = temp[p:].reshape((n, 1)) X_p = np.ones((n, p + 1)) for i in range(1, p + 1): X_p[:, i] = temp[i - 1: i - 1 + n] return X_p, x_T def AR(coeff, init, T): offset = coeff[0] mult_coef = np.flip(coeff, 0)[:-1] series = np.zeros(T) for k, x_i in enumerate(init): series[k] = x_i for i in range(k + 1, T): series[i] = np.sum(mult_coef * series[i - k - 1:i]) + np.random.normal() + offset return series def estimated_autocorrelation(x): n = len(x) mu, sigma2 = np.mean(x), np.var(x) r = np.correlate(x - mu, x - mu, mode = 'full')[-n:] result = r/(sigma2 * (np.arange(n, 0, -1))) return result def test_AR(x, coef, N): x = x.flatten() offset = coef[0] slope = coef[1] ave_err = np.empty((len(x) - N, N)) x_temp = np.empty(N) for i in range(len(x) - N): x_temp[0] = x[i] * slope + offset for j in range(N -1): x_temp[j + 1] = x_temp[j] * slope + offset ave_err[i, :] = (x_temp - x[i:i+N])**2 return ave_err x = sio.loadmat('Tut2_file1.mat')['x'].flatten() plt.plot(x * 2, ',') plt.xlabel('time') plt.ylabel('x') X_p, x_T = set_data(1, x) model = linear_model.LinearRegression() model.fit(X_p, x_T) model.coef_ """ Explanation: Linear time series analysis - AR/MA models Lorenzo Biasi (3529646), Julius Vernie (3502879) Task 1. AR(p) models. 1.1 End of explanation """ x_1 = AR(np.append(model.coef_, 0), [0, x[0]], 50001) plt.plot(x_1[1:], ',') plt.xlabel('time') plt.ylabel('x') """ Explanation: We can see that simulating the data as an AR(1) model is not effective in giving us anything similar the aquired data. This is due to the fact that we made the wrong assumptions when we computed the coefficients of our data. Our data is in fact clearly not a stationary process and in particular cannot be from an AR(1) model alone, as there is a linear trend in time. The meaning of the slope that we computed shows that successive data points are strongly correlated. End of explanation """ rgr = linear_model.LinearRegression() x = x.reshape((len(x)), 1) t = np.arange(len(x)).reshape(x.shape) rgr.fit(t, x) x_star= x - rgr.predict(t) plt.plot(x_star.flatten(), ',') plt.xlabel('time') plt.ylabel('x') """ Explanation: 1.2 Before estimating the coefficients of the AR(1) model we remove the linear trend in time, thus making it resemble more closely the model with which we are trying to analyze it. End of explanation """ X_p, x_T = set_data(1, x_star) model.fit(X_p, x_T) model.coef_ x_1 = AR(np.append(model.coef_[0], 0), [0, x_star[0]], 50000) plt.plot(x_1, ',') plt.xlabel('time') plt.ylabel('x') plt.plot(x_star[1:], x_star[:-1], ',') plt.xlabel(r'x$_{t - 1}$') plt.ylabel(r'x$_{t}$') """ Explanation: This time we obtain different coefficients, that we can use to simulate the data and see if they give us a similar result the real data. End of explanation """ err = test_AR(x_star, model.coef_[0], 10) np.sum(err, axis=0) / err.shape[0] plt.plot(np.sum(err, axis=0) / err.shape[0], 'o', label='Error') plt.plot([0, 10.], np.ones(2)* np.var(x_star), 'r', label='Variance') plt.grid(linestyle='dotted') plt.xlabel(r'$\Delta t$') plt.ylabel('Error') """ Explanation: In the next plot we can see that our predicted values have an error that decays exponentially the further we try to make a prediction. By the time it arrives to 5 time steps of distance it equal to the variance. 
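(As an added note to make the limit precise: for an AR(1) model $x_t = a x_{t-1} + \varepsilon_t$ with noise variance $\sigma^2$, the point forecast $a^h x_t$ decays geometrically toward the mean, while the $h$-step forecast error variance $\sigma^2\frac{1-a^{2h}}{1-a^{2}}$ rises toward the process variance $\sigma^2/(1-a^{2})$, which is the plateau at the variance line seen in the plot.)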
End of explanation """ x = sio.loadmat('Tut2_file2.mat')['x'].flatten() plt.plot(x, ',') plt.xlabel('time') plt.ylabel('x') np.mean(x) X_p, x_T = set_data(1, x) model = linear_model.LinearRegression() model.fit(X_p, x_T) model.coef_ """ Explanation: 1.4 By plotting the data we can already see that this cannot be a simple AR model. The data seems divided in 2 parts with very few data points in the middle. End of explanation """ x_1 = AR(model.coef_[0], x[:1], 50001) plt.plot(x_1[1:], ',') plt.xlabel('time') plt.ylabel('x') """ Explanation: We tried to simulate the data with these coefficients but it is clearly uneffective End of explanation """ plt.plot(x[1:], x[:-1], ',') plt.xlabel(r'x$_{t - 1}$') plt.ylabel(r'x$_{t}$') plt.plot(x_star[1:], x_star[:-1], ',') plt.xlabel(r'x$_{t - 1}$') plt.ylabel(r'x$_{t}$') """ Explanation: By plotting the return plot we can better understand what is going on. The data can be divided in two parts. We can see that successive data is always around one of this two poles. If it were a real AR model we would expect something like the return plots shown below this one. End of explanation """ plt.plot(estimated_autocorrelation(x)[:200]) plt.xlabel(r'$\Delta$t') plt.ylabel(r'$\rho$') plt.plot(estimated_autocorrelation(x_1.flatten())[:20]) plt.xlabel(r'$\Delta$t') plt.ylabel(r'$\rho$') """ Explanation: We can see that in the autocorelation plot the trend is exponential, which is what we would expect, but it is taking too long to decay for being a an AR model with small value of $p$ End of explanation """ data = sio.loadmat('Tut2_file3.mat') x_AR = data['x_AR'].flatten() x_MA = data['x_MA'].flatten() """ Explanation: Task 2. Autocorrelation and partial autocorrelation. 2.1 End of explanation """ for i in range(3,7): X_p, x_T = set_data(i, x_AR) model = linear_model.LinearRegression() model.fit(X_p, x_T) plt.plot(estimated_autocorrelation((x_T - model.predict(X_p)).flatten())[:20], \ label='AR(' + str(i) + ')') plt.xlabel(r'$\Delta$t') plt.ylabel(r'$\rho$') plt.legend() """ Explanation: For computing the $\hat p$ for the AR model we predicted the parameters $a_i$ for various AR(p). We find that for p = 6 we do not have any correlation between previous values and future values. End of explanation """ plt.plot(estimated_autocorrelation(x_MA)[:20]) plt.xlabel(r'$\Delta$t') plt.ylabel(r'$\rho$') test_AR(x, ) """ Explanation: For the MA $\hat q$ could be around 4-6 End of explanation """
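# Added sketch (not in the original tutorial): for an MA(q) process the theoretical
# autocorrelation is exactly zero beyond lag q, so a rough data-driven estimate of q is one
# less than the first lag whose sample autocorrelation falls inside the ~95% noise band.
noise_band = 1.96 / np.sqrt(len(x_MA))
rho = estimated_autocorrelation(x_MA)
q_hat = next(lag for lag in range(1, 20) if abs(rho[lag]) < noise_band) - 1
print(noise_band, q_hat)
"""
Explanation: The cut-off rule is only a heuristic, but it should land in the same 4-6 range read off the autocorrelation plot above.
End of explanation
"""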
UWSEDS/LectureNotes
Fall2018/09_UnitTests/unit-tests.ipynb
bsd-2-clause
import numpy as np # Code Under Test def entropy(ps): items = ps * np.log(ps) return np.abs(-np.sum(items)) # Smoke test entropy([0.2, 0.8]) """ Explanation: Unit Tests Overview and Principles Testing is the process by which you exercise your code to determine if it performs as expected. The code you are testing is referred to as the code under test. There are two parts to writing tests. 1. invoking the code under test so that it is exercised in a particular way; 1. evaluating the results of executing code under test to determine if it behaved as expected. The collection of tests performed are referred to as the test cases. The fraction of the code under test that is executed as a result of running the test cases is referred to as test coverage. For dynamical languages such as Python, it's extremely important to have a high test coverage. In fact, you should try to get 100% coverage. This is because little checking is done when the source code is read by the Python interpreter. For example, the code under test might contain a line that has a function that is undefined. This would not be detected until that line of code is executed. Test cases can be of several types. Below are listed some common classifications of test cases. - Smoke test. This is an invocation of the code under test to see if there is an unexpected exception. It's useful as a starting point, but this doesn't tell you anything about the correctness of the results of a computation. - One-shot test. In this case, you call the code under test with arguments for which you know the expected result. - Edge test. The code under test is invoked with arguments that should cause an exception, and you evaluate if the expected exception occurrs. - Pattern test - Based on your knowledge of the calculation (not implementation) of the code under test, you construct a suite of test cases for which the results are known or there are known patterns in these results that are used to evaluate the results returned. Another principle of testing is to limit what is done in a single test case. Generally, a test case should focus on one use of one function. Sometimes, this is a challenge since the function being tested may call other functions that you are testing. This means that bugs in the called functions may cause failures in the tests of the calling functions. Often, you sort this out by knowing the structure of the code and focusing first on failures in lower level tests. In other situations, you may use more advanced techniques called mocking. A discussion of mocking is beyond the scope of this course. A best practice is to develop your tests while you are developing your code. Indeed, one school of thought in software engineering, called test-driven development, advocates that you write the tests before you implement the code under test so that the test cases become a kind of specification for what the code under test should do. Examples of Test Cases This section presents examples of test cases. The code under test is the calculation of entropy. Entropy of a set of probabilities $$ H = -\sum_i p_i \log(p_i) $$ where $\sum_i p_i = 1$. End of explanation """ # One-shot test. Need to know the correct answer. entries = [ [0, [1]], ] for entry in entries: ans = entry[0] prob = entry[1] if not np.isclose(entropy(prob), ans): print("Test failed!") print ("Test completed!") """ Explanation: Suppose that all of the probability of a distribution is at one point. An example of this is a coin with two heads. 
Whenever you flip it, you always get heads. That is, the probability of a head is 1. What is the entropy of such a distribution? From the calculation above, we see that the entropy should be $log(1)$, which is 0. This means that we have a test case where we know the result! End of explanation """ # Edge test. This is something that should cause an exception. entropy([-0.5]) """ Explanation: Question: What is an example of another one-shot test? (Hint: You need to know the expected result.) One edge test of interest is to provide an input that is not a distribution in that probabilities don't sum to 1. End of explanation """ # Pattern test def test_equal_probabilities(n): prob = 1.0/n ps = np.repeat(prob , n) if np.isclose(entropy(ps), -np.log(prob)): print("Worked!") else: import pdb; pdb.set_trace() print ("Bad result.") # Run a test test_equal_probabilities(100000) """ Explanation: Now let's consider a pattern test. Examining the structure of the calculation of $H$, we consider a situation in which there are $n$ equal probabilities. That is, $p_i = \frac{1}{n}$. $$ H = -\sum_{i=1}^{n} p_i \log(p_i) = -\sum_{i=1}^{n} \frac{1}{n} \log(\frac{1}{n}) = n (-\frac{1}{n} \log(\frac{1}{n}) ) = -\log(\frac{1}{n}) $$ For example, entropy([0.5, 0.5]) should be $-log(0.5)$. End of explanation """ import unittest # Define a class in which the tests will run class UnitTests(unittest.TestCase): # Each method in the class to execute a test def test_success(self): self.assertEqual(1, 1) def test_success1(self): self.assertTrue(1 == 1) def test_failure(self): self.assertLess(1, 2) suite = unittest.TestLoader().loadTestsFromTestCase(UnitTests) _ = unittest.TextTestRunner().run(suite) # Function the handles test loading #def test_setup(argument ?): """ Explanation: You see that there are many, many cases to test. So far, we've been writing special codes for each test case. We can do better. Unittest Infrastructure There are several reasons to use a test infrastructure: - If you have many test cases (which you should!), the test infrastructure will save you from writing a lot of code. - The infrastructure provides a uniform way to report test results, and to handle test failures. - A test infrastructure can tell you about coverage so you know what tests to add. We'll be using the unittest framework. This is a separate Python package. Using this infrastructure, requires the following: 1. import the unittest module 1. define a class that inherits from unittest.TestCase 1. write methods that run the code to be tested and check the outcomes. The last item has two subparts. First, we must identify which methods in the class inheriting from unittest.TestCase are tests. You indicate that a method is to be run as a test by having the method name begin with "test". Second, the "test methods" should communicate with the infrastructure the results of evaluating output from the code under test. This is done by using assert statements. For example, self.assertEqual takes two arguments. If these are objects for which == returns True, then the test passes. Otherwise, the test fails. End of explanation """ # Implementating a pattern test. Use functions in the test. import unittest # Define a class in which the tests will run class TestEntropy(unittest.TestCase): def test_equal_probability(self): def test(count): """ Invokes the entropy function for a number of values equal to count that have the same probability. 
:param int count: """ raise RuntimeError ("Not implemented.") # test(2) test(20) test(200) #test_setup(TestEntropy) import unittest # Define a class in which the tests will run class TestEntropy(unittest.TestCase): """Write the full set of tests.""" """ Explanation: Code for homework or your work should use test files. In this lesson, we'll show how to write test codes in a Jupyter notebook. This is done for pedidogical reasons. It is NOT not something you should do in practice, except as an intermediate exploratory approach. As expected, the first test passes, but the second test fails. Exercise Rewrite the above one-shot test for entropy using the unittest infrastructure. End of explanation """ import unittest # Define a class in which the tests will run class TestEntropy(unittest.TestCase): def test_invalid_probability(self): try: entropy([0.1, 0.5]) self.assertTrue(False) except ValueError: self.assertTrue(True) #test_setup(TestEntropy) """ Explanation: Testing For Exceptions Edge test cases often involves handling exceptions. One approach is to code this directly. End of explanation """ import unittest # Define a class in which the tests will run class TestEntropy(unittest.TestCase): def test_invalid_probability(self): with self.assertRaises(ValueError): entropy([0.1, 0.5]) suite = unittest.TestLoader().loadTestsFromTestCase(TestEntropy) _ = unittest.TextTestRunner().run(suite) """ Explanation: unittest provides help with testing exceptions. End of explanation """ import unittest # Define a class in which the tests will run class TestEntryopy(unittest.TestCase): def test_oneshot(self): self.assertEqual(geomean([1,1]), 1) def test_oneshot2(self): self.assertEqual(geomean([3, 3, 3]), 3) #test_setup(TestGeomean) #def geomean(argument?): # return ? """ Explanation: Test Files Although I presented the elements of unittest in a notebook. your tests should be in a file. If the name of module with the code under test is foo.py, then the name of the test file should be test_foo.py. The structure of the test file will be very similar to cells above. You will import unittest. You must also import the module with the code under test. Take a look at test_prime.py in this directory to see an example. Discussion Question: What tests would you write for a plotting function? Test Driven Development Start by writing the tests. Then write the code. We illustrate this by considering a function geomean that takes a list of numbers as input and produces the geometric mean on output. End of explanation """
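# One possible solution sketch for the geomean exercise above (not the official answer).
def geomean(values):
    product = 1.0
    for value in values:
        product *= value
    # the geometric mean is the n-th root of the product of the n values
    return product ** (1.0 / len(values))
"""
Explanation: A caveat worth testing for: because the n-th root is computed in floating point, geomean([3, 3, 3]) can come back as something like 2.9999999999999996 rather than exactly 3, in which case the strict assertEqual in test_oneshot2 is better written as assertAlmostEqual. Catching exactly this kind of numerical subtlety is one of the payoffs of writing the tests first.
End of explanation
"""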
NuSTAR/nustar_pysolar
notebooks/Convert_Example.ipynb
mit
import sys from os.path import * import os # For loading the NuSTAR data from astropy.io import fits # Load the NuSTAR python libraries from nustar_pysolar import convert, utils """ Explanation: Code for converting an observation to solar coordinates Step 1: Run the pipeline on the data to get mode06 files with the correct status bit setting. Note that as of nustardas verion 1.6.0 you can now set the "runsplitsc" keyword to automatically split the CHU combinations for mode06 into separate data files. These files will be stored in the event_cl output directory and have filenames like: nu20201001001A06_chu2_N_cl.evt Optional: Check and see how much exposure is in each file. Use the Observation Report Notebook example to see how to do this. Step 2: Convert the data to heliocentric coordinates. Below uses the nustar.convert methods to change the image to heliocentric coordinates from RA/dec coordinates. Load the python libraries that we're going to use: End of explanation """ #infile = '/Users/bwgref/science/solar/july_2016/data/20201002001/event_cl/nu20201002001B06_chu3_N_cl.evt' infile = '/Users/bwgref/science/solar/data/Sol_16208/20201002001/event_cl/nu20201002001B06_chu3_N_cl.evt' hdulist = fits.open(infile) evtdata = hdulist[1].data hdr = hdulist[1].header hdulist.close() """ Explanation: Get the data from the FITS file. Here we loop over the header keywords to get the correct columns for the X/Y coordinates. We also parse the FITS header to get the data we need to project the X/Y values (which are integers from 0-->1000) into RA/dec coordinates. End of explanation """ reload(convert) (newdata, newhdr) = convert.to_solar(evtdata, hdr) """ Explanation: Rotate to solar coordinates: Variation on what we did to setup the pointing. Note that this can take a little bit of time to run (~a minute or two). The important optin here is how frequently one wants to recompute the position of the Sun. The default is once every 5 seconds. Note that this can take a while (~minutes), so I recommend saving the output as a new FITS file (below). End of explanation """ # # Make the new filename: (sfile, ext)=splitext(infile) outfile=sfile+'_sunpos.evt' # Remove output file if necessary if isfile(outfile): print(outfile, 'exists! Removing old version...') os.remove(outfile) fits.writeto(outfile, newdata, newhdr) """ Explanation: Write the output to a new FITS file. Below keeps the RAWX, RAWY, DET_ID, GRADE, and PI columns from the original file. It repalces the X/Y columns with the new sun_x, sun_y columns. End of explanation """ convert.convert_file(infile) """ Explanation: Alternatively, use the convenience wrapper which automatically adds on the _sunpos.evt suffix: End of explanation """
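# Added sanity check (not part of the original notebook): confirm that the convenience wrapper
# wrote the converted event file next to the input, with the _sunpos.evt suffix described above.
(sfile, ext) = splitext(infile)
print(isfile(sfile + '_sunpos.evt'))
"""
Explanation: If this prints True, the heliocentric event file is in place and ready for imaging or further analysis.
End of explanation
"""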
tensorflow/docs-l10n
site/ja/addons/tutorials/layers_weightnormalization.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2020 The TensorFlow Authors. End of explanation """ !pip install -U tensorflow-addons import tensorflow as tf import tensorflow_addons as tfa import numpy as np from matplotlib import pyplot as plt # Hyper Parameters batch_size = 32 epochs = 10 num_classes=10 """ Explanation: TensorFlow Addons レイヤー : 重み正規化 <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/addons/tutorials/layers_weightnormalization"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org で表示</a></td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/addons/tutorials/layers_weightnormalization.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab で実行</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/addons/tutorials/layers_weightnormalization.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub でソースを表示</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/addons/tutorials/layers_weightnormalization.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード</a> </td> </table> 概要 このノートブックでは、重み正規化レイヤーの使用方法と収束性向上の方法を説明します。 重み正規化 ディープニューラルネットワークのトレーニングを高速化する単純な再パラメータ化である。 Tim Salimans、Diederik P. 
Kingma (2016) この方法で重みを再パラメータ化することにより、最適化問題の条件付けを改善し、確率的勾配降下法の収束を高速化します。我々の再パラメータ化はバッチ正規化から着想を得ていますが、ミニバッチ内で例と例の間の依存性は導入していません。つまり、この手法は LSTM のような再帰モデルや、深層強化学習や生成モデルのようなバッチ正規化があまり適していないノイズに敏感なアプリケーションにもうまく適用できることを意味します。この手法ははるかに単純ではありますが、完全なバッチ正規化の高速化の大部分を提供します。さらに、我々の手法の計算オーバーヘッドは低くなるため、同じ時間内により多くの最適化ステップを実行することができます。 https://arxiv.org/abs/1602.07868 <img src="https://raw.githubusercontent.com/seanpmorgan/tf-weightnorm/master/static/wrapped-graph.png"><br><br> セットアップ End of explanation """ # Standard ConvNet reg_model = tf.keras.Sequential([ tf.keras.layers.Conv2D(6, 5, activation='relu'), tf.keras.layers.MaxPooling2D(2, 2), tf.keras.layers.Conv2D(16, 5, activation='relu'), tf.keras.layers.MaxPooling2D(2, 2), tf.keras.layers.Flatten(), tf.keras.layers.Dense(120, activation='relu'), tf.keras.layers.Dense(84, activation='relu'), tf.keras.layers.Dense(num_classes, activation='softmax'), ]) # WeightNorm ConvNet wn_model = tf.keras.Sequential([ tfa.layers.WeightNormalization(tf.keras.layers.Conv2D(6, 5, activation='relu')), tf.keras.layers.MaxPooling2D(2, 2), tfa.layers.WeightNormalization(tf.keras.layers.Conv2D(16, 5, activation='relu')), tf.keras.layers.MaxPooling2D(2, 2), tf.keras.layers.Flatten(), tfa.layers.WeightNormalization(tf.keras.layers.Dense(120, activation='relu')), tfa.layers.WeightNormalization(tf.keras.layers.Dense(84, activation='relu')), tfa.layers.WeightNormalization(tf.keras.layers.Dense(num_classes, activation='softmax')), ]) """ Explanation: モデルを構築する End of explanation """ (x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data() # Convert class vectors to binary class matrices. y_train = tf.keras.utils.to_categorical(y_train, num_classes) y_test = tf.keras.utils.to_categorical(y_test, num_classes) x_train = x_train.astype('float32') x_test = x_test.astype('float32') x_train /= 255 x_test /= 255 """ Explanation: データを読み込む End of explanation """ reg_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) reg_history = reg_model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_data=(x_test, y_test), shuffle=True) wn_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) wn_history = wn_model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_data=(x_test, y_test), shuffle=True) reg_accuracy = reg_history.history['accuracy'] wn_accuracy = wn_history.history['accuracy'] plt.plot(np.linspace(0, epochs, epochs), reg_accuracy, color='red', label='Regular ConvNet') plt.plot(np.linspace(0, epochs, epochs), wn_accuracy, color='blue', label='WeightNorm ConvNet') plt.title('WeightNorm Accuracy Comparison') plt.legend() plt.grid(True) plt.show() """ Explanation: モデルをトレーニングする End of explanation """
rudyryk/LearnAI
notebooks/2_fullyconnected.ipynb
unlicense
# These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import os import numpy as np import tensorflow as tf from six.moves import cPickle as pickle from six.moves import range """ Explanation: Deep Learning Assignment 2 Previously in 1_notmnist.ipynb, we created a pickle with formatted datasets for training, development and testing on the notMNIST dataset. The goal of this assignment is to progressively train deeper and more accurate models using TensorFlow. End of explanation """ data_root = '../data' # Change me to store data elsewhere pickle_file = os.path.join(data_root, 'notMNIST.pickle') with open(pickle_file, 'rb') as f: save = pickle.load(f) train_dataset = save['train_dataset'] train_labels = save['train_labels'] valid_dataset = save['valid_dataset'] valid_labels = save['valid_labels'] test_dataset = save['test_dataset'] test_labels = save['test_labels'] del save # hint to help gc free up memory print('Training set', train_dataset.shape, train_labels.shape) print('Validation set', valid_dataset.shape, valid_labels.shape) print('Test set', test_dataset.shape, test_labels.shape) """ Explanation: First reload the data we generated in 1_notmnist.ipynb. End of explanation """ image_size = 28 num_labels = 10 def reformat(dataset, labels): dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32) # Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...] labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32) return dataset, labels train_dataset, train_labels = reformat(train_dataset, train_labels) valid_dataset, valid_labels = reformat(valid_dataset, valid_labels) test_dataset, test_labels = reformat(test_dataset, test_labels) print('Training set', train_dataset.shape, train_labels.shape) print('Validation set', valid_dataset.shape, valid_labels.shape) print('Test set', test_dataset.shape, test_labels.shape) """ Explanation: Reformat into a shape that's more adapted to the models we're going to train: - data as a flat matrix, - labels as float 1-hot encodings. End of explanation """ # With gradient descent training, even this much data is prohibitive. # Subset the training data for faster turnaround. train_subset = 10000 graph = tf.Graph() with graph.as_default(): # Input data. # Load the training, validation and test data into constants that are # attached to the graph. tf_train_dataset = tf.constant(train_dataset[:train_subset]) tf_train_labels = tf.constant(train_labels[:train_subset]) tf_valid_dataset = tf.constant(valid_dataset) tf_test_dataset = tf.constant(test_dataset) # Variables. # These are the parameters that we are going to be training. The weight # matrix will be initialized using random values following a (truncated) # normal distribution. The biases get initialized to zero. weights = tf.Variable( tf.truncated_normal([image_size * image_size, num_labels])) biases = tf.Variable(tf.zeros([num_labels])) # Training computation. # We multiply the inputs with the weight matrix, and add biases. We compute # the softmax and cross-entropy (it's one operation in TensorFlow, because # it's very common, and it can be optimized). We take the average of this # cross-entropy across all training examples: that's our loss. logits = tf.matmul(tf_train_dataset, weights) + biases loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits)) # Optimizer. # We are going to find the minimum of this loss using gradient descent. 
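  # Added note: the 0.5 argument below is the learning rate; each step moves the weights by
  # the learning rate times the negative gradient of the loss.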
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss) # Predictions for the training, validation, and test data. # These are not part of training, but merely here so that we can report # accuracy figures as we train. train_prediction = tf.nn.softmax(logits) valid_prediction = tf.nn.softmax( tf.matmul(tf_valid_dataset, weights) + biases) test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases) """ Explanation: We're first going to train a multinomial logistic regression using simple gradient descent. TensorFlow works like this: * First you describe the computation that you want to see performed: what the inputs, the variables, and the operations look like. These get created as nodes over a computation graph. This description is all contained within the block below: with graph.as_default(): ... Then you can run the operations on this graph as many times as you want by calling session.run(), providing it outputs to fetch from the graph that get returned. This runtime operation is all contained in the block below: with tf.Session(graph=graph) as session: ... Let's load all the data into TensorFlow and build the computation graph corresponding to our training: End of explanation """ num_steps = 801 def accuracy(predictions, labels): return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1)) / predictions.shape[0]) with tf.Session(graph=graph) as session: summary_writer = tf.summary.FileWriter('../logs', graph=graph) # This is a one-time operation which ensures the parameters get initialized as # we described in the graph: random weights for the matrix, zeros for the # biases. tf.global_variables_initializer().run() print('Initialized') for step in range(num_steps): # Run the computations. We tell .run() that we want to run the optimizer, # and get the loss value and the training predictions returned as numpy # arrays. _, l, predictions = session.run([optimizer, loss, train_prediction]) if (step % 100 == 0): print('Loss at step %d: %f' % (step, l)) print('Training accuracy: %.1f%%' % accuracy( predictions, train_labels[:train_subset, :])) # Calling .eval() on valid_prediction is basically like calling run(), but # just to get that one numpy array. Note that it recomputes all its graph # dependencies. print('Validation accuracy: %.1f%%' % accuracy( valid_prediction.eval(), valid_labels)) merged_summary = tf.summary.merge_all() print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels)) """ Explanation: Let's run this computation and iterate: End of explanation """ batch_size = 128 graph = tf.Graph() with graph.as_default(): # Input data. For the training data, we use a placeholder that will be fed # at run time with a training minibatch. tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size)) tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels)) tf_valid_dataset = tf.constant(valid_dataset) tf_test_dataset = tf.constant(test_dataset) # Variables. weights = tf.Variable( tf.truncated_normal([image_size * image_size, num_labels])) biases = tf.Variable(tf.zeros([num_labels])) # Training computation. logits = tf.matmul(tf_train_dataset, weights) + biases loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits)) # Optimizer. optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss) # Predictions for the training, validation, and test data. 
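  # Added note: softmax turns the raw logits into class probabilities; as in the previous graph,
  # these prediction ops are only used for reporting accuracy, not for training.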
train_prediction = tf.nn.softmax(logits) valid_prediction = tf.nn.softmax( tf.matmul(tf_valid_dataset, weights) + biases) test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases) """ Explanation: Let's now switch to stochastic gradient descent training instead, which is much faster. The graph will be similar, except that instead of holding all the training data into a constant node, we create a Placeholder node which will be fed actual data at every call of session.run(). End of explanation """ num_steps = 3001 with tf.Session(graph=graph) as session: tf.global_variables_initializer().run() print("Initialized") for step in range(num_steps): # Pick an offset within the training data, which has been randomized. # Note: we could use better randomization across epochs. offset = (step * batch_size) % (train_labels.shape[0] - batch_size) # Generate a minibatch. batch_data = train_dataset[offset:(offset + batch_size), :] batch_labels = train_labels[offset:(offset + batch_size), :] # Prepare a dictionary telling the session where to feed the minibatch. # The key of the dictionary is the placeholder node of the graph to be fed, # and the value is the numpy array to feed to it. feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels} _, l, predictions = session.run( [optimizer, loss, train_prediction], feed_dict=feed_dict) if (step % 500 == 0): print("Minibatch loss at step %d: %f" % (step, l)) print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels)) print("Validation accuracy: %.1f%%" % accuracy( valid_prediction.eval(), valid_labels)) print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels)) """ Explanation: Let's run it: End of explanation """ relu_size = 1024 input_size = image_size * image_size num_steps = 3001 batch_size = 128 graph = tf.Graph() with graph.as_default(): tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, input_size)) tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels)) tf_valid_dataset = tf.constant(valid_dataset) tf_test_dataset = tf.constant(test_dataset) def create_relu_model(x, weights, biases): # Layer 1: W1 * X + B1 -> ReLu layer_1 = tf.matmul(x, weights['n1']) + biases['n1'] layer_1 = tf.nn.relu(layer_1) # Output layer: W2 * X + B2 -> Output out_layer = tf.matmul(layer_1, weights['out']) + biases['out'] return out_layer # Simple ReLu model weights = { 'n1': tf.Variable(tf.truncated_normal([input_size, relu_size])), 'out': tf.Variable(tf.truncated_normal([relu_size, num_labels])), } biases = { 'n1': tf.Variable(tf.zeros([relu_size])), 'out': tf.Variable(tf.zeros([num_labels])), } relu_model = create_relu_model(tf_train_dataset, weights, biases) # Loss function loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits( labels=tf_train_labels, logits=relu_model)) # Optimizer. optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss) # Predictions for the training, validation, and test data. train_prediction = tf.nn.softmax(relu_model) valid_prediction = tf.nn.softmax( create_relu_model(tf_valid_dataset, weights, biases)) test_prediction = tf.nn.softmax( create_relu_model(tf_test_dataset, weights, biases)) with tf.Session(graph=graph) as session: tf.global_variables_initializer().run() print("Initialized") for step in range(num_steps): # Pick an offset within the training data, which has been randomized. # Note: we could use better randomization across epochs. 
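        # Added note: a common improvement (not done here) is to reshuffle the training set at
        # the start of every epoch instead of cycling through it in a fixed order.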
offset = (step * batch_size) % (train_labels.shape[0] - batch_size) # Generate a minibatch. batch_data = train_dataset[offset:(offset + batch_size), :] batch_labels = train_labels[offset:(offset + batch_size), :] # Prepare a dictionary telling the session where to feed the minibatch. # The key of the dictionary is the placeholder node of the graph to be fed, # and the value is the numpy array to feed to it. feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels} _, l, predictions = session.run( [optimizer, loss, train_prediction], feed_dict=feed_dict) if (step % 500 == 0): print("Minibatch loss at step %d: %f" % (step, l)) print("Minibatch accuracy: %.1f%%" % accuracy( predictions, batch_labels)) print("Validation accuracy: %.1f%%" % accuracy( valid_prediction.eval(), valid_labels)) print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels)) """ Explanation: Problem Turn the logistic regression example with SGD into a 1-hidden layer neural network with rectified linear units nn.relu() and 1024 hidden nodes. This model should improve your validation / test accuracy. End of explanation """
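# Added suggestion (not part of the assignment): with the default stddev of 1.0,
# tf.truncated_normal can give the 784 -> 1024 ReLU layer a very large initial loss.
# Scaling the initial weights by sqrt(2 / fan_in), often called He initialization, is a
# common fix; swapping something like this into the weights dictionary above is a small change.
w1_he_init = tf.truncated_normal([input_size, relu_size],
                                 stddev=np.sqrt(2.0 / input_size))
"""
Explanation: This is only a suggested tweak: better-scaled initial weights usually make the first few minibatch losses far smaller and make training less sensitive to the learning rate.
End of explanation
"""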
m3at/Labelizer
Labelizer_part1.ipynb
mit
#%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py %load_ext watermark # for reproducibility %watermark -a 'Paul Willot' -mvp numpy,scipy,spacy """ Explanation: Extracting Structure from Scientific Abstracts using a LSTM neural network Paul Willot This project was made for the ICADL 2015 conference. In this notebook we will go through all steps required to build a LSTM neural network to classify sentences inside a scientific paper abstract. Summary: * Extract dataset * Pre-process * Label analysis * Choosing labels * Create train and test set End of explanation """ !wget https://www.dropbox.com/s/lhqe3bls0mkbq57/pubmed_result_548899.txt.zip -P ./data/ !unzip -o ./data/pubmed_result_548899.txt.zip -d ./data/ """ Explanation: First, let's gather some data. We use the PubMed database of medical paper. Specificaly, we will focus on structured abstracts. There is approximately 3 million avalaible, and we will focus on a reduced portion of this (500.000) but feel free to use a bigger corpus. The easiest way to try this is to use the toy_corpus.txt and tokenizer.pickle included in the project repo. To work on real dataset, for convenience I prepared the following files. Use the one appropriate for your needs, for example you can download the training and testing datas and jump to the next notebook. Download the full corpus (~500.000 structured abstracts, 500 MB compressed) End of explanation """ #!wget https://www.dropbox.com/s/ujo1l8duu31js34/toy_corpus.txt.zip -P ./data/ #!unzip -o ./TMP/toy_corpus.txt.zip -d ./data/ """ Explanation: Download a toy corpus (224 structured abstracts, 200 KB compressed) Note: this file is already included in the project GitHub repository. End of explanation """ !wget https://www.dropbox.com/s/lmv88n1vpmp6c19/corpus_lemmatized.pickle.zip -P ./data/ !unzip -o ./data/corpus_lemmatized.pickle.zip -d ./data/ """ Explanation: Download a lemmatized corpus (preprocessed, 350 MB compressed) End of explanation """ !wget https://www.dropbox.com/s/0o7i0ejv4aqf6gs/training_4_BacObjMetCon.pickle.zip -P ./data/ !unzip -o ./data/training_4_BacObjMetCon.pickle.zip -d ./data/ """ Explanation: Download training and testing datas for the LSTM (preprocessed, vectorized and splitted, 100 MB compressed) End of explanation """ from __future__ import absolute_import from __future__ import print_function # import local libraries import tools import prepare import lemmatize import analyze import preprocess """ Explanation: Some imports End of explanation """ data = prepare.extract_txt('data/toy_corpus.txt') """ Explanation: <a id='extract'></a> Extract and parse the dataset Separate each documents, isolate the abstracts End of explanation """ print("%s\n[...]"%data[0][:800]) abstracts = prepare.get_abstracts(data) """ Explanation: Our data currently look like this: End of explanation """ def remove_err(datas,errs): err=sorted([item for subitem in errs for item in subitem],reverse=True) for e in err: for d in datas: del d[e] remove_err([abstracts],prepare.get_errors(abstracts)) print("Working on %d documents."%len(abstracts)) """ Explanation: Cleaning, dumping the abstracts with incorrect number of labels End of explanation """ abstracts = prepare.filter_numbers(abstracts) """ Explanation: <a id='pre-process'></a> Pre-process Replacing numbers with ##NB. 
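(Added note on the motivation: collapsing every numeral into the single ##NB placeholder keeps measurements, dosages, and sample sizes from inflating the vocabulary the classifier has to learn, while still signalling that a number was present.)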
End of explanation """ tokenizer = prepare.create_sentence_tokenizer(abstracts) # For a more general parser, use the one provided in NLTK: #import nltk.data #tokenizer = nltk.data.load('tokenizers/punkt/english.pickle') abstracts_labeled = prepare.ex_all_labels(abstracts,tokenizer) """ Explanation: For correct sentence splitting, we train a tokenizer using NLTK Punkt Sentence Tokenizer. This tokenizer use an unsupervised algorithm to learn how to split sentences on a corpus. End of explanation """ abstracts_labeled[0][0] """ Explanation: Our data look now like this: End of explanation """ lemmatized = lemmatize.lemm(abstracts_labeled) lemmatized[0] """ Explanation: Lemmatization It may be a long process on huge dataset, but using spacy make it currently 50 times faster than a slimple use of the NLTK tools. It get a huge speedup with paralellisation (tryed on 80 cores). Specify nb_core=X if needed. End of explanation """ tools.dump_pickle(lemmatized,"data/fast_lemmatized.pickle") """ Explanation: Let's save that End of explanation """ lemmatized = tools.load_pickle("data/corpus_lemmatized.pickle") """ Explanation: To directly load a lemmatized corpus End of explanation """ dic = analyze.create_dic_simple(lemmatized) print("Number of labels :",len(dic.keys())) analyze.show_keys(dic,threshold=10) primary_keyword=['AIM','BACKGROUND','INTRODUCTION','METHOD','RESULT','CONCLUSION','OBJECTIVE','DESIGN','FINDING','OUTCOME','PURPOSE'] analyze.regroup_keys(dic,primary_keyword) analyze.show_keys(dic,threshold=10) keys_to_replace = [['INTRODUCTION','CONTEXT','PURPOSE'], ['AIM','SETTING'], ['FINDING','OUTCOME','DISCUSSION']] replace_with = ['BACKGROUND', 'METHOD', 'CONCLUSION'] analyze.replace_keys(dic,keys_to_replace,replace_with) analyze.show_keys(dic,threshold=10) """ Explanation: <a id='label analysis'></a> Label analysis Does not affect the corpus, we simply do this get some insights. End of explanation """ pattern = [ ['BACKGROUND','BACKGROUNDS'], ['METHOD','METHODS'], ['RESULT','RESULTS'], ['CONCLUSION','CONCLUSIONS'], ] sub_perfect = analyze.get_exactly(lemmatized,pattern=pattern,no_truncate=True) sub_perfect = analyze.get_exactly(lemmatized,pattern=pattern,no_truncate=False) print("%d abstracts labeled and ready for the next part"%len(sub_perfect)) """ Explanation: <a id='choosing labels'></a> Choosing labels Does affect the corpus We can restrict our data to work only on abstracts having labels maching a specific pattern... End of explanation """ dic = preprocess.create_dic(lemmatized,100) # We can re-use the variables defined in the analysis section #primary_keyword=['AIM','BACKGROUND','METHOD','RESULT','CONCLUSION','OBJECTIVE','DESIGN','FINDINGS','OUTCOME','PURPOSE'] analyze.regroup_keys(dic,primary_keyword) #keys_to_replace = [['INTRODUCTION','BACKGROUND','AIM','PURPOSE','CONTEXT'], # ['CONCLUSION']] #replace_with = ['OBJECTIVE', # 'RESULT'] analyze.replace_keys(dic,keys_to_replace,replace_with) # We can restrict our analysis to the main labels dic = {key:dic[key] for key in ['BACKGROUND','RESULT','METHOD','CONCLUSION']} analyze.show_keys(dic,threshold=10) print("Sentences per label :",["%s %d"%(s,len(dic[s][1])) for s in dic.keys()]) """ Explanation: ... 
Or we can keep a more noisy dataset and reduce it to a set of labels End of explanation """ classes_names = ['BACKGROUND', 'METHOD', 'RESULT','CONCLUSION'] dic.keys() # train/test split split = 0.8 # truncate the number of abstracts to consider for each label, # -1 to set to the maximum while keeping the number of sentences per labels equal raw_x_train, raw_y_train, raw_x_test, raw_y_test = preprocess.split_data(dic,classes_names, split_train_test=split, truncate=-1) """ Explanation: <a id='create train'></a> Creating train and test data Let's format the datas for the classifier Reorder the labels for better readability End of explanation """ X_train, y_train, X_test, y_test, feature_names, max_features, vectorizer = preprocess.vectorize_data(raw_x_train, raw_y_train, raw_x_test, raw_y_test) print("Number of features : %d"%(max_features)) """ Explanation: Vectorize the sentences. End of explanation """ tools.dump_pickle([X_train, y_train, X_test, y_test, feature_names, max_features, classes_names, vectorizer], "data/unpadded_4_BacObjMetCon.pickle") """ Explanation: Now let's save all this End of explanation """
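# Added check (not in the original notebook, and assuming vectorize_data returns array-like
# matrices with a .shape attribute): a quick look at what the next notebook will expect.
print(X_train.shape, len(y_train), X_test.shape, len(y_test))
print(max_features, classes_names)
"""
Explanation: The row counts of the vectorized matrices should match the number of labels, and the column count should equal max_features, before handing the pickle over to the LSTM notebook.
End of explanation
"""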
ledeprogram/algorithms
class7/donow/hon_jingyi_donow_7.ipynb
gpl-3.0
import pandas as pd %matplotlib inline import numpy as np from sklearn.linear_model import LogisticRegression """ Explanation: Apply logistic regression to categorize whether a county had high mortality rate due to contamination 1. Import the necessary packages to read in the data, plot, and create a logistic regression model End of explanation """ df = pd.read_csv("hanford.csv") df """ Explanation: 2. Read in the hanford.csv file in the data/ folder End of explanation """ df.describe() """ Explanation: <img src="../../images/hanford_variables.png"></img> 3. Calculate the basic descriptive statistics on the data End of explanation """ df['High_Exposure'] = df['Exposure'].apply(lambda x:1 if x > 3.41 else 0) """ Explanation: 4. Find a reasonable threshold to say exposure is high and recode the data End of explanation """ lm = LogisticRegression() x = np.asarray(dataset[['Mortality']]) y = np.asarray(dataset['Exposure']) lm = lm.fit(x,y) """ Explanation: 5. Create a logistic regression model End of explanation """
zhuanxuhit/deep-learning
embeddings/Skip-Grams-Solution.ipynb
mit
import time import numpy as np import tensorflow as tf import utils """ Explanation: Skip-gram word2vec In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation. Readings Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material. A really good conceptual overview of word2vec from Chris McCormick First word2vec paper from Mikolov et al. NIPS paper with improvements for word2vec also from Mikolov et al. An implementation of word2vec from Thushan Ganegedara TensorFlow word2vec tutorial Word embeddings When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This a huge waste of computation. To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding the index of the "on" input unit. Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension. <img src='assets/tokenize_lookup.png' width=500> There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well. Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning. Word2Vec The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram. <img src="assets/word2vec_architectures.png" width="500"> In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts. First up, importing packages. 
End of explanation """ from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm import zipfile dataset_folder_path = 'data' dataset_filename = 'text8.zip' dataset_name = 'Text8 Dataset' class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(dataset_filename): with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar: urlretrieve( 'http://mattmahoney.net/dc/text8.zip', dataset_filename, pbar.hook) if not isdir(dataset_folder_path): with zipfile.ZipFile(dataset_filename) as zip_ref: zip_ref.extractall(dataset_folder_path) with open('data/text8') as f: text = f.read() """ Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space. End of explanation """ words = utils.preprocess(text) print(words[:30]) print("Total words: {}".format(len(words))) print("Unique words: {}".format(len(set(words)))) """ Explanation: Preprocessing Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function coverts any punctuation into tokens, so a period is changed to &lt;PERIOD&gt;. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it. End of explanation """ vocab_to_int, int_to_vocab = utils.create_lookup_tables(words) int_words = [vocab_to_int[word] for word in words] """ Explanation: And here I'm creating dictionaries to covert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words. End of explanation """ from collections import Counter import random threshold = 1e-5 word_counts = Counter(int_words) total_count = len(int_words) freqs = {word: count/total_count for word, count in word_counts.items()} p_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts} train_words = [word for word in int_words if random.random() < (1 - p_drop[word])] """ Explanation: Subsampling Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by $$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$ where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset. I'm going to leave this up to you as an exercise. Check out my solution to see how I did it. Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probablility $P(w_i)$ shown above. Note that $P(w_i)$ is that probability that a word is discarded. Assign the subsampled data to train_words. 
End of explanation """ def get_target(words, idx, window_size=5): ''' Get a list of words in a window around an index. ''' R = np.random.randint(1, window_size+1) start = idx - R if (idx - R) > 0 else 0 stop = idx + R target_words = set(words[start:idx] + words[idx+1:stop+1]) return list(target_words) get_tar """ Explanation: Making batches Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$. From Mikolov et al.: "Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels." Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you chose a random number of words to from the window. End of explanation """ def get_batches(words, batch_size, window_size=5): ''' Create a generator of word batches as a tuple (inputs, targets) ''' n_batches = len(words)//batch_size # only full batches words = words[:n_batches*batch_size] for idx in range(0, len(words), batch_size): x, y = [], [] batch = words[idx:idx+batch_size] for ii in range(len(batch)): batch_x = batch[ii] batch_y = get_target(batch, ii, window_size) y.extend(batch_y) x.extend([batch_x]*len(batch_y)) yield x, y """ Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory. End of explanation """ train_graph = tf.Graph() with train_graph.as_default(): inputs = tf.placeholder(tf.int32, [None], name='inputs') labels = tf.placeholder(tf.int32, [None, None], name='labels') """ Explanation: Building the graph From Chris McCormick's blog, we can see the general structure of our network. The input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal. The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer becuase we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset. I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal. Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1. 
End of explanation """ n_vocab = len(int_to_vocab) n_embedding = 200 # Number of embedding features with train_graph.as_default(): embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1)) embed = tf.nn.embedding_lookup(embedding, inputs) """ Explanation: Embedding The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary. Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform. End of explanation """ # Number of negative labels to sample n_sampled = 100 with train_graph.as_default(): softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1)) softmax_b = tf.Variable(tf.zeros(n_vocab)) # Calculate the loss using negative sampling loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels, embed, n_sampled, n_vocab) cost = tf.reduce_mean(loss) optimizer = tf.train.AdamOptimizer().minimize(cost) """ Explanation: Negative sampling For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss. Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works. End of explanation """ with train_graph.as_default(): ## From Thushan Ganegedara's implementation valid_size = 16 # Random set of words to evaluate similarity on. valid_window = 100 # pick 8 samples from (0,100) and (1000,1100) each ranges. 
lower id implies more frequent valid_examples = np.array(random.sample(range(valid_window), valid_size//2)) valid_examples = np.append(valid_examples, random.sample(range(1000,1000+valid_window), valid_size//2)) valid_dataset = tf.constant(valid_examples, dtype=tf.int32) # We use the cosine distance: norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True)) normalized_embedding = embedding / norm valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset) similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding)) # If the checkpoints directory doesn't exist: !mkdir checkpoints epochs = 10 batch_size = 1000 window_size = 10 with train_graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=train_graph) as sess: iteration = 1 loss = 0 sess.run(tf.global_variables_initializer()) for e in range(1, epochs+1): batches = get_batches(train_words, batch_size, window_size) start = time.time() for x, y in batches: feed = {inputs: x, labels: np.array(y)[:, None]} train_loss, _ = sess.run([cost, optimizer], feed_dict=feed) loss += train_loss if iteration % 100 == 0: end = time.time() print("Epoch {}/{}".format(e, epochs), "Iteration: {}".format(iteration), "Avg. Training loss: {:.4f}".format(loss/100), "{:.4f} sec/batch".format((end-start)/100)) loss = 0 start = time.time() if iteration % 1000 == 0: # note that this is expensive (~20% slowdown if computed every 500 steps) sim = similarity.eval() for i in range(valid_size): valid_word = int_to_vocab[valid_examples[i]] top_k = 8 # number of nearest neighbors nearest = (-sim[i, :]).argsort()[1:top_k+1] log = 'Nearest to %s:' % valid_word for k in range(top_k): close_word = int_to_vocab[nearest[k]] log = '%s %s,' % (log, close_word) print(log) iteration += 1 save_path = saver.save(sess, "checkpoints/text8.ckpt") embed_mat = sess.run(normalized_embedding) """ Explanation: Validation This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings. End of explanation """ with train_graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=train_graph) as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) embed_mat = sess.run(embedding) """ Explanation: Restore the trained network if you need to: End of explanation """ %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt from sklearn.manifold import TSNE viz_words = 500 tsne = TSNE() embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :]) fig, ax = plt.subplots(figsize=(14, 14)) for idx in range(viz_words): plt.scatter(*embed_tsne[idx, :], color='steelblue') plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7) """ Explanation: Visualizing the word vectors Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local stucture. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data. End of explanation """
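As one more optional check (a sketch, not part of the original notebook), you can query the nearest neighbors of a single word straight from the saved matrix with plain NumPy — this assumes embed_mat holds the normalized embeddings computed during training:

def nearest_words(word, embed_mat, k=8):
    vec = embed_mat[vocab_to_int[word]]
    sims = embed_mat @ vec                  # cosine similarity, since rows are normalized
    best = np.argsort(-sims)[1:k + 1]       # skip the word itself
    return [int_to_vocab[i] for i in best]

print(nearest_words('two', embed_mat))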
mne-tools/mne-tools.github.io
0.21/_downloads/2567f25ca4c6b483c12d38184d7fe9d7/plot_decoding_xdawn_eeg.ipynb
bsd-3-clause
# Authors: Alexandre Barachant <alexandre.barachant@gmail.com> # # License: BSD (3-clause) import numpy as np import matplotlib.pyplot as plt from sklearn.model_selection import StratifiedKFold from sklearn.pipeline import make_pipeline from sklearn.linear_model import LogisticRegression from sklearn.metrics import classification_report, confusion_matrix from sklearn.preprocessing import MinMaxScaler from mne import io, pick_types, read_events, Epochs, EvokedArray from mne.datasets import sample from mne.preprocessing import Xdawn from mne.decoding import Vectorizer print(__doc__) data_path = sample.data_path() """ Explanation: XDAWN Decoding From EEG data ERP decoding with Xdawn ([1], [2]). For each event type, a set of spatial Xdawn filters are trained and applied on the signal. Channels are concatenated and rescaled to create features vectors that will be fed into a logistic regression. End of explanation """ raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' tmin, tmax = -0.1, 0.3 event_id = {'Auditory/Left': 1, 'Auditory/Right': 2, 'Visual/Left': 3, 'Visual/Right': 4} n_filter = 3 # Setup for reading the raw data raw = io.read_raw_fif(raw_fname, preload=True) raw.filter(1, 20, fir_design='firwin') events = read_events(event_fname) picks = pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False, exclude='bads') epochs = Epochs(raw, events, event_id, tmin, tmax, proj=False, picks=picks, baseline=None, preload=True, verbose=False) # Create classification pipeline clf = make_pipeline(Xdawn(n_components=n_filter), Vectorizer(), MinMaxScaler(), LogisticRegression(penalty='l1', solver='liblinear', multi_class='auto')) # Get the labels labels = epochs.events[:, -1] # Cross validator cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42) # Do cross-validation preds = np.empty(len(labels)) for train, test in cv.split(epochs, labels): clf.fit(epochs[train], labels[train]) preds[test] = clf.predict(epochs[test]) # Classification report target_names = ['aud_l', 'aud_r', 'vis_l', 'vis_r'] report = classification_report(labels, preds, target_names=target_names) print(report) # Normalized confusion matrix cm = confusion_matrix(labels, preds) cm_normalized = cm.astype(float) / cm.sum(axis=1)[:, np.newaxis] # Plot confusion matrix fig, ax = plt.subplots(1) im = ax.imshow(cm_normalized, interpolation='nearest', cmap=plt.cm.Blues) ax.set(title='Normalized Confusion matrix') fig.colorbar(im) tick_marks = np.arange(len(target_names)) plt.xticks(tick_marks, target_names, rotation=45) plt.yticks(tick_marks, target_names) fig.tight_layout() ax.set(ylabel='True label', xlabel='Predicted label') """ Explanation: Set parameters and read data End of explanation """ fig, axes = plt.subplots(nrows=len(event_id), ncols=n_filter, figsize=(n_filter, len(event_id) * 2)) fitted_xdawn = clf.steps[0][1] tmp_info = epochs.info.copy() tmp_info['sfreq'] = 1. for ii, cur_class in enumerate(sorted(event_id)): cur_patterns = fitted_xdawn.patterns_[cur_class] pattern_evoked = EvokedArray(cur_patterns[:n_filter].T, tmp_info, tmin=0) pattern_evoked.plot_topomap( times=np.arange(n_filter), time_format='Component %d' if ii == 0 else '', colorbar=False, show_names=False, axes=axes[ii], show=False) axes[ii, 0].set(ylabel=cur_class) fig.tight_layout(h_pad=1.0, w_pad=1.0, pad=0.1) """ Explanation: The patterns_ attribute of a fitted Xdawn instance (here from the last cross-validation fold) can be used for visualization. 
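For a quick look at what is being plotted, a small hedged sketch (not part of the original example) prints the spatial patterns for one class — the dictionary keys follow the event_id names defined above:

fitted_xdawn = clf.steps[0][1]                     # the Xdawn step of the last fitted pipeline
patterns = fitted_xdawn.patterns_['Auditory/Left']
print(patterns.shape)                              # rows are components, columns are EEG channels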
End of explanation """
gcallah/Indra
notebooks/Agent.ipynb
gpl-3.0
cd .. from indra2.agent import Agent """ Explanation: Indra Agent Class agent.py is the base class of all agents, environments, and objects contained in an environment. Its basic character is that it is a vector, and supports basic vector and matrix operations. End of explanation """ def newt_action(agent): print("I'm " + agent.name + " and I'm inventing modern mechanics!") newton = Agent("Newton", attrs={"place": 0.0, "time": 1658.0, "achieve": 43.9}, action=newt_action, duration=30) """ Explanation: Agent class constructor accepts 5 parameters: name attrs action duration groups Lets create an agent called newton. End of explanation """ len(newton) """ Explanation: Now we will explore all the magic methods of agent class. len returns number of attributes of an agent End of explanation """ str(newton) """ Explanation: str returns name of the agent End of explanation """ newton['time'] """ Explanation: getitem returns value of an attribute End of explanation """ newton['place'] = 2.5 newton['place'] """ Explanation: setitem sets/changes value of an attribute. Returns keyerror if agent doesn't have the attribute. End of explanation """ "time" in newton """ Explanation: contains returns true if agent contains the attribute End of explanation """ for attr in newton: print(attr) """ Explanation: iter loops over attributes of an agent End of explanation """ for attr in reversed(newton): print(attr) """ Explanation: reversed loops over attributes in reverse order End of explanation """ LEIBBYEAR = 1646 LEIBDYEAR = 1716 def leib_action(agent): print("I'm " + agent.name + " and I'm inventing calculus!") leibniz = Agent("Leibniz", attrs={"place": 0.0, "time": LEIBBYEAR}, action=leib_action, duration=20) other_Leibniz = Agent("Leibniz", attrs={"place": 1.0, "time": LEIBBYEAR}, action=leib_action, duration=20) print("Leibniz & othere_Leibniz:", leibniz == other_Leibniz) print("Leibniz & Leibniz:", leibniz == leibniz) print("Leibniz & Newton:", leibniz == newton) """ Explanation: eq checks if two agents are equivalent End of explanation """ repr(leibniz) """ Explanation: repr End of explanation """ newton() leibniz() """ Explanation: call: Agents will 'act' by being called as a function. If the agent has no act() function, do nothing. Agents should return True if they did, in fact,'do something,' or False if they did not. End of explanation """ newton newton += 2 newton newton += 2 newton """ Explanation: iadd increases value of each attributes End of explanation """ newton -= 2 newton """ Explanation: isub this is opposite of iadd - substracts values of attributes. End of explanation """ newton *= 2 newton """ Explanation: imul multiplies each attributes End of explanation """ import composite comp = newton + leibniz comp """ Explanation: add adds two agents making it a composite/group. Name of the composite/group is concatinated names of all the agents. As shown below, agents become members of the composite. End of explanation """ newton.to_json() ModernNewton = Agent("ModerNewton", attrs={"place": 0.0, "time": 1658.0, "achieve": 43.9}, action=newt_action, duration=30) """ Explanation: Now we will explore general class methods to_json() - Returns json respresentation of the agent. End of explanation """ ModernNewton.join_group(comp) ModernNewton comp """ Explanation: join_group End of explanation """ newton.same_type(leibniz) newton.same_type(newton) """ Explanation: same_type - Returns true if agents are of same type. 
End of explanation """ newton.attrs_to_dict() """ Explanation: attrs_to_dict - returns ordered dictionary representing attributes End of explanation """ newton.sum() """ Explanation: sum - Returns sum of all the attributes End of explanation """ newton.magnitude() """ Explanation: magnitude - End of explanation """ # newton.die() """ Explanation: die - makes agent inactive End of explanation """ newton newton.set_pos(100,100) newton """ Explanation: is_active - returns true if agent is active set_pos, get_pos, get_x, get_y - sets or returns x,y co-ordinates of the agent. End of explanation """
Jackporter415/phys202-2015-work
assignments/midterm/InteractEx06.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import numpy as np from IPython.display import Image from IPython.html.widgets import interact, interactive, fixed """ Explanation: Interact Exercise 6 Imports Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell. End of explanation """ Image('fermidist.png') """ Explanation: Exploring the Fermi distribution In quantum statistics, the Fermi-Dirac distribution is related to the probability that a particle will be in a quantum state with energy $\epsilon$. The equation for the distribution $F(\epsilon)$ is: End of explanation """ def fermidist(energy, mu, kT): """Compute the Fermi distribution at energy, mu and kT.""" # YOUR CODE HERE F = 1/(np.exp((energy-mu)/kT)+1) return F assert np.allclose(fermidist(0.5, 1.0, 10.0), 0.51249739648421033) assert np.allclose(fermidist(np.linspace(0.0,1.0,10), 1.0, 10.0), np.array([ 0.52497919, 0.5222076 , 0.51943465, 0.5166605 , 0.51388532, 0.51110928, 0.50833256, 0.50555533, 0.50277775, 0.5 ])) """ Explanation: In this equation: $\epsilon$ is the single particle energy. $\mu$ is the chemical potential, which is related to the total number of particles. $k$ is the Boltzmann constant. $T$ is the temperature in Kelvin. In the cell below, typeset this equation using LaTeX: $F(\epsilon)=\frac{1}{e^\frac{(\epsilon-\mu)}{kT}+1}$ Define a function fermidist(energy, mu, kT) that computes the distribution function for a given value of energy, chemical potential mu and temperature kT. Note here, kT is a single variable with units of energy. Make sure your function works with an array and don't use any for or while loops in your code. End of explanation """ def plot_fermidist(mu, kT): ax = plt.gca() energy = np.arange(0,11.0) plt.plot(energy,fermidist(energy,mu,kT)) plt.ylim(0,2.0) ax.spines['right'].set_visible(False) ax.spines['top'].set_visible(False) ax.get_xaxis().tick_bottom() ax.get_yaxis().tick_left() plot_fermidist(4.0, 1.0) assert True # leave this for grading the plot_fermidist function """ Explanation: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT. Use enegies over the range $[0,10.0]$ and a suitable number of points. Choose an appropriate x and y limit for your visualization. Label your x and y axis and the overall visualization. Customize your plot in 3 other ways to make it effective and beautiful. End of explanation """ interact(plot_fermidist,mu = [0.0,5.0], kT = [0.1,10.0]) """ Explanation: Use interact with plot_fermidist to explore the distribution: For mu use a floating point slider over the range $[0.0,5.0]$. for kT use a floating point slider over the range $[0.1,10.0]$. End of explanation """
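If you want explicit control over the step size and starting value, a hedged variant (not required by the exercise) passes slider widgets directly instead of the shorthand range arguments — the step of 0.1 is an arbitrary choice:

from IPython.html.widgets import FloatSlider

interact(plot_fermidist,
         mu=FloatSlider(min=0.0, max=5.0, step=0.1, value=2.0),
         kT=FloatSlider(min=0.1, max=10.0, step=0.1, value=1.0))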
ewulczyn/talk_page_abuse
src/analysis/Characterizing Attackers and Victims.ipynb
apache-2.0
%load_ext autoreload %autoreload 2 %matplotlib inline import warnings warnings.filterwarnings('ignore') import matplotlib.pyplot as plt import seaborn as sns import numpy as np import pandas as pd from load_utils import * from analysis_utils import compare_groups,get_genders d = load_diffs() df_events, df_blocked_user_text = load_block_events_and_users() """ Explanation: Characterizing Attackers How many attackers are vandals vs normal users vs power users ? are victims also trolls ? investigate anons, gender, tenure, status End of explanation """ d['blocked'].groupby('user_text')['pred_aggression_score']\ .agg( {'aggressiveness': np.mean})\ .hist(bins = 50) plt.xlabel('user level aggression score') plt.ylabel('num users') plt.title('') # lets exclude anons d['blocked'].query('not author_anon').groupby('user_text')['pred_aggression_score']\ .agg( {'aggressiveness': np.mean})\ .hist(bins = 50) plt.xlabel('user level aggression score') plt.ylabel('num users') plt.title('') # lets compare to non-blocked users # NOTE: would be better to have taken a random sample of users d['2015'].query('not author_anon').groupby('user_text')['pred_aggression_score']\ .agg( {'aggressiveness': np.mean})\ .hist(bins = 50) plt.xlabel('user level aggression score') plt.ylabel('num users') plt.title('') """ Explanation: Attacker Specific Analysis Q: Are users blocked for personal attacks pure trolls/vandals? Methodology: Consider users blocked for harassment. Compute histogram over mean user level aggression and attack scores. Aggression End of explanation """ d['blocked'].groupby('user_text')['pred_recipient_score']\ .agg( {'aggressiveness': np.mean}).hist(bins = 30) plt.xlabel('user level attack score') plt.ylabel('num users') plt.title('') d['blocked'].query('not author_anon').groupby('user_text')['pred_recipient_score']\ .agg( {'aggressiveness': np.mean}).hist(bins = 30) plt.xlabel('user level attack score') plt.ylabel('num users') plt.title('') d['2015'].query('not author_anon').groupby('user_text')['pred_recipient_score']\ .agg( {'aggressiveness': np.mean})\ .hist(bins = 50) plt.xlabel('user level attack score') plt.ylabel('num users') plt.title('') """ Explanation: Attack End of explanation """ # TODO """ Explanation: Q: Do attacks come from pure trolls or from users who are generally non-attacking? For different thresholds, assign each attack its users aggression score. Plot cdfs over aggression scores. End of explanation """ # TODO """ Explanation: Victim Specific Analysis Q Are victims also trolls? End of explanation """ o = (False, True) x = 'author_anon' compare_groups(d['sample'][:100000], x, order = o) """ Explanation: Shared Analysis Q: How do comments made by registered and anonymous authors compare? End of explanation """ # don't count posts to own article o = (False, True) x = 'recipient_anon' compare_groups(d['sample'][:100000].query('not own_page'), x, order = o) """ Explanation: Q: How do comments received by registered and anonymous authors compare? End of explanation """ x = 'own_page' o = (False, True) compare_groups(d['sample'][:100000], x, order = o) x = 'own_page' compare_groups(d['sample'][:100000], x, order = o, hue = 'author_anon') """ Explanation: Q: How do authors write differently on their own page than on other pages? End of explanation """ d_gender = get_genders(d['sample']) o = ('unknown: registered', 'male', 'female') x = 'author_gender' compare_groups(d_gender, x, order = o) """ Explanation: Q: What is the effect of the author's gender? 
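Alongside compare_groups, a hedged cross-check (not in the original analysis) is to look at the raw group means directly — this assumes d_gender keeps the pred_aggression_score column from the sampled diffs:

print(d_gender.groupby('author_gender')['pred_aggression_score'].agg(['mean', 'count']))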
End of explanation """ o = ('unknown: registered', 'male', 'female') x = 'recipient_gender' compare_groups(d_gender.query('not own_page'), x, order= o) """ Explanation: Q: What is the effect of the recipient's gender? End of explanation """ o = ('unknown: registered', 'male', 'female') x = 'author_gender' compare_groups(d_gender.query("not own_page and recipient_gender != 'unknown:anon'"), x, order = o, hue = 'recipient_gender') """ Explanation: Q: How does the effect change when you interact author and recipient gender? End of explanation """ thresholds = np.percentile(d['2015']['user_text'].value_counts(), np.arange(0, 100.01,0.5 )) thresholds = sorted(set(thresholds.astype(int))) bins = [] for i in range(len(thresholds)-1): label = '%d-%d' % (thresholds[i], thresholds[i+1]-1) rnge = range(thresholds[i], thresholds[i+1]) bins.append((label, rnge)) def map_count(x): for label, rnge in bins: if x in rnge: return label d_temp = d['2015'].query('not author_anon')\ .groupby('user_text')['pred_aggression_score']\ .agg( {'aggressiveness': np.mean, 'count': len})\ .assign(num_comment_range = lambda x: x['count'].apply(map_count)) o = [e[0] for e in bins] sns.pointplot(x='num_comment_range', y= 'aggressiveness', data= d_temp, order = o) # TODO: extend to attacks, use long term user data, repeat for victims """ Explanation: Q: How does tone depend on the frequency of commenting? Methodology: let the "aggressiveness" of a user be the averge aggression_score of all their comments. Compare aggression scores across groups of users based on how much the post End of explanation """
mne-tools/mne-tools.github.io
0.20/_downloads/075ba1175413b0aa0dc66e721f312729/plot_mixed_norm_inverse.ipynb
bsd-3-clause
# Author: Alexandre Gramfort <alexandre.gramfort@inria.fr> # Daniel Strohmeier <daniel.strohmeier@tu-ilmenau.de> # # License: BSD (3-clause) import numpy as np import mne from mne.datasets import sample from mne.inverse_sparse import mixed_norm, make_stc_from_dipoles from mne.minimum_norm import make_inverse_operator, apply_inverse from mne.viz import (plot_sparse_source_estimates, plot_dipole_locations, plot_dipole_amplitudes) print(__doc__) data_path = sample.data_path() fwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif' ave_fname = data_path + '/MEG/sample/sample_audvis-ave.fif' cov_fname = data_path + '/MEG/sample/sample_audvis-shrunk-cov.fif' subjects_dir = data_path + '/subjects' # Read noise covariance matrix cov = mne.read_cov(cov_fname) # Handling average file condition = 'Left Auditory' evoked = mne.read_evokeds(ave_fname, condition=condition, baseline=(None, 0)) evoked.crop(tmin=0, tmax=0.3) # Handling forward solution forward = mne.read_forward_solution(fwd_fname) """ Explanation: Compute sparse inverse solution with mixed norm: MxNE and irMxNE Runs an (ir)MxNE (L1/L2 [1] or L0.5/L2 [2] mixed norm) inverse solver. L0.5/L2 is done with irMxNE which allows for sparser source estimates with less amplitude bias due to the non-convexity of the L0.5/L2 mixed norm penalty. End of explanation """ alpha = 55 # regularization parameter between 0 and 100 (100 is high) loose, depth = 0.2, 0.9 # loose orientation & depth weighting n_mxne_iter = 10 # if > 1 use L0.5/L2 reweighted mixed norm solver # if n_mxne_iter > 1 dSPM weighting can be avoided. # Compute dSPM solution to be used as weights in MxNE inverse_operator = make_inverse_operator(evoked.info, forward, cov, depth=depth, fixed=True, use_cps=True) stc_dspm = apply_inverse(evoked, inverse_operator, lambda2=1. 
/ 9., method='dSPM') # Compute (ir)MxNE inverse solution with dipole output dipoles, residual = mixed_norm( evoked, forward, cov, alpha, loose=loose, depth=depth, maxit=3000, tol=1e-4, active_set_size=10, debias=True, weights=stc_dspm, weights_min=8., n_mxne_iter=n_mxne_iter, return_residual=True, return_as_dipoles=True) """ Explanation: Run solver End of explanation """ plot_dipole_amplitudes(dipoles) # Plot dipole location of the strongest dipole with MRI slices idx = np.argmax([np.max(np.abs(dip.amplitude)) for dip in dipoles]) plot_dipole_locations(dipoles[idx], forward['mri_head_t'], 'sample', subjects_dir=subjects_dir, mode='orthoview', idx='amplitude') # Plot dipole locations of all dipoles with MRI slices for dip in dipoles: plot_dipole_locations(dip, forward['mri_head_t'], 'sample', subjects_dir=subjects_dir, mode='orthoview', idx='amplitude') """ Explanation: Plot dipole activations End of explanation """ ylim = dict(eeg=[-10, 10], grad=[-400, 400], mag=[-600, 600]) evoked.pick_types(meg=True, eeg=True, exclude='bads') evoked.plot(ylim=ylim, proj=True, time_unit='s') residual.pick_types(meg=True, eeg=True, exclude='bads') residual.plot(ylim=ylim, proj=True, time_unit='s') """ Explanation: Plot residual End of explanation """ stc = make_stc_from_dipoles(dipoles, forward['src']) """ Explanation: Generate stc from dipoles End of explanation """ solver = "MxNE" if n_mxne_iter == 1 else "irMxNE" plot_sparse_source_estimates(forward['src'], stc, bgcolor=(1, 1, 1), fig_name="%s (cond %s)" % (solver, condition), opacity=0.1) """ Explanation: View in 2D and 3D ("glass" brain like 3D plot) End of explanation """ morph = mne.compute_source_morph(stc, subject_from='sample', subject_to='fsaverage', spacing=None, sparse=True, subjects_dir=subjects_dir) stc_fsaverage = morph.apply(stc) src_fsaverage_fname = subjects_dir + '/fsaverage/bem/fsaverage-ico-5-src.fif' src_fsaverage = mne.read_source_spaces(src_fsaverage_fname) plot_sparse_source_estimates(src_fsaverage, stc_fsaverage, bgcolor=(1, 1, 1), fig_name="Morphed %s (cond %s)" % (solver, condition), opacity=0.1) """ Explanation: Morph onto fsaverage brain and view End of explanation """
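As a hedged numerical complement to the figures (not in the original example), you can summarize the recovered dipoles using the amplitude and times attributes already used above:

print('Number of dipoles in the active set: %d' % len(dipoles))
for ii, dip in enumerate(dipoles):
    peak = np.argmax(np.abs(dip.amplitude))
    print('Dipole %d: peak amplitude %.1f nAm at %.3f s'
          % (ii, dip.amplitude[peak] * 1e9, dip.times[peak]))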
grokkaine/biopycourse
day2/.ipynb_checkpoints/ML_regression-checkpoint.ipynb
cc0-1.0
%matplotlib inline import matplotlib.pyplot as plt import numpy as np from sklearn import datasets, linear_model diabetes = datasets.load_diabetes() #diabetes print(diabetes.data.shape, diabetes.target.shape) print(diabetes.data[:5,:3]) print(diabetes.target) ft = 2 # feature type 0 - age, 1 - sex, 2 - bmi X = diabetes.data[:, np.newaxis] #expand the new #print "Expansion:", diabetes.data.shape, X.shape #print X[0] #print X[0, 0, ft] Xt = X[:, :, ft] #print Xt.shape #print Xt[:5] """ Explanation: Regression problems generally deal with estimating a set of unknown parameters ($\beta$) from a set of known "independent" variables $X$ such that an arbitrary function $f(X,\beta)$ (the model) can approximate a set of "dependent" variables $Y$, or $f(X,\beta)\approx Y$. Many algorithms exists depending on $f$ being linear of non-linear, or the particularities of X and Y datasets. Diabetes dataset Most ML algorithm classes are improved to fit general datasets. Some famous standard datasets exists, such as the diabetes datasets. While many things can be learned from it, it is used especially as an example for simple regression. Ten baseline variables, age, sex, body mass index, average blood pressure, and six blood serum measurements were obtained for each of n = 442 diabetes patients, as well as the response of interest, a quantitative measure of disease progression one year after baseline. features: 10 physiological variables (age, sex, weight, blood pressure) dimensions: 442 patients target (response): an indication of disease progression after one year 20-260 Linear regression In the example below, a linear regression model is used to predict diabete. Which feature does better at predicting diabetes? End of explanation """ #Split the data into training/testing sets X_train = Xt[:-20] X_test = Xt[-20:] #Split the targets into training/testing sets y_train = diabetes.target[:-20] y_test = diabetes.target[-20:] regr = linear_model.LinearRegression() regr.fit(X_train, y_train) y_predict = regr.predict(X_test) score = regr.score(X_test, y_test) print 'Betas (regression coefficients): \n', regr.coef_ print("Mean square error: %.2f" % np.mean((y_predict - y_test) ** 2)) print('Variance score: %.2f' % score) plt.scatter(X_test, y_test, color='black') plt.plot(X_test, y_predict, color='blue', linewidth=3) %matplotlib inline import matplotlib.pyplot as plt import numpy as np from sklearn import datasets, linear_model diabetes = datasets.load_diabetes() X = diabetes.data #Split the data into training/testing sets X_train = X[:-20] X_test = X[-20:] #Split the targets into training/testing sets y_train = diabetes.target[:-20] y_test = diabetes.target[-20:] regr = linear_model.LinearRegression() regr.fit(X_train, y_train) y_predict = regr.predict(X_test) score = regr.score(X_test, y_test) print('Coefficients: \n', regr.coef_) print("Mean square error: %.2f"% np.mean((y_predict - y_test) ** 2)) print('Variance score: %.2f' % score) """ Explanation: Crossvalidation Crosvalidation is a general validation method that uses a part of the dataset for training (training data) a model while the remaining rest is used to estimate how efective the model is at predicting (test data). This will split X and Y into four groups, training X, Y and testing X, Y. Usually several crossvalidation tests are done by randomly picking training variables and the training set is by convenience at 80% of the whole data. Goodness of fit Residual sum of squares (mean square error), $v = \sum (y_t - \langle y_t \rangle )^2$. 
Regression sum of squares $u = \sum (y_t - y_p)^2$. R^2, the coefficient of determination, defined as (1 - u/v). Best possible score is 1.0, lower values are worse. Task Modify the script to perform crossvalidation on 100 randomly picked training sets. Design a crosvalidation function of your own and afterwards look for the scikit-learn's own function and use it instead. What is KFold crossvalidation? End of explanation """ from sklearn.cross_decomposition import PLSRegression n = 1000 q = 3 p = 10 X = np.random.normal(size=n * p).reshape((n, p)) B = np.array([[1, 2] + [0] * (p - 2)] * q).T # each Yj = 1*X1 + 2*X2 + noize #B = np.array([[1, 0.5, 2, 4] + [0] * (p - 4)] * q).T Y = np.dot(X, B) + np.random.normal(size=n * q).reshape((n, q)) + 5 pls2 = PLSRegression(n_components=2) pls2.fit(X, Y) print("True B (such that: Y = XB + Err)") print(B) # compare pls2.coefs with B print("Estimated B") print(np.round(pls2.coefs, 1)) Yp = pls2.predict(X) from sklearn.metrics import r2_score print "R2",r2_score(Y, Yp) T, U = pls2.transform(X, Y) #Apply the dimension reduction learned on the training data ## Notice that the first component is usually well correlated with all the columns in X cp = 0# 0 - first component, 1 -second component, etc xc = 0# X matrix columns import matplotlib.pyplot as plt #plt.title(title) plt.plot(T[:, cp], X[:,1], "ob") #plt.plot(T[:, cp], X, "ob") plt.title("T vs X, component "+str(cp+1)) plt.ylabel('matrix') plt.xlabel('latent matrix scores') """ Explanation: Multivariate case, a PLS Regression example So far our regression target was unidimensional. The hardcore regression however involves multiple target vectors, also called multiple regression. One can still use linear regression techniques, with different corrections that take into account for many exceptions, one such being that the X matrix must be full rank in order for the least square optimization (the engine of linear regression) to work. There are also many other techniques, one my favorites being partial least least squares regression (PLS-R), which I choose to exemplify. PLS is also called projection to latent structures, which mignt be more apropriate, since the multiple regression problem $Y = XB + E$ , is solved by projecting the X and Y matrices into latent (hidden) lower dimensional space that is describing them. Just as with least square fitting ,it is difficult to explain how this is done, and it requires knowledge of factor analysis. The end result is a decomposition into a product of score and loading matrices like this: $X = T P^{\top} + E$ $Y = U Q^{\top} + F$ , from which the coeficient matrix B is estimated. Multivariate case, a PLS Regression example So far our regression target was unidimensional. The hardcore regression however involves multiple target vectors, also called multiple regression. One can still use linear regression techniques, with different corrections that take into account for many exceptions, one such being that the X matrix must be full rank in order for the least square optimization (the engine of linear regression) to work. There are also many other techniques, one my favorites being partial least least squares regression (PLS-R), which I choose to exemplify. PLS is also called projection to latent structures, which mignt be more apropriate, since the multiple regression problem $Y = XB + E$ , is solved by projecting the X and Y matrices into latent (hidden) lower dimensional space that is describing them. 
Just as with least squares fitting, it is difficult to explain how this is done, and it requires knowledge of factor analysis. The end result is a decomposition into a product of score and loading matrices like this: $X = T P^{\top} + E$ and $Y = U Q^{\top} + F$, from which the coefficient matrix $B$ is estimated. End of explanation """
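Coming back to the cross-validation task above, here is a hedged sketch of the scikit-learn route (one possible answer, not the only one); depending on your scikit-learn version KFold lives in sklearn.model_selection (newer) or sklearn.cross_validation (older), and the 10 folds are an arbitrary choice:

from sklearn.model_selection import KFold, cross_val_score

X_all = diabetes.data
y_all = diabetes.target
regr = linear_model.LinearRegression()

kf = KFold(n_splits=10, shuffle=True)
scores = cross_val_score(regr, X_all, y_all, cv=kf)   # R^2 on each held-out fold
print("mean R^2 over folds: %.3f" % scores.mean())

KFold cross-validation simply partitions the data into k equal folds and rotates which fold is held out for testing, so every observation is used for testing exactly once.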
stevenydc/2015lab1
hw0.ipynb
mit
import sys print sys.version """ Explanation: Homework 0 Survey due 4th September, 2015 Submission due 10th September, 2015 Welcome to CS109 / STAT121 / AC209 / E-109 (http://cs109.org/). In this class, we will be using a variety of tools that will require some initial configuration. To ensure everything goes smoothly moving forward, we will setup the majority of those tools in this homework. It is very important that you do this setup as soon as possible. While some of this will likely be dull, doing it now will enable us to do more exciting work in the weeks that follow without getting bogged down in further software configuration. You will also be filling out a mandatory class survey and creating a github and AWS account, which are mandatory as well. Please note that the survey is due on September 4th. The reason is that we need your github account name to set you up for the homework submission system. If you do not submit the survey on time you might not be able to submit the homework in time. This homework will not be graded, however, you must submit it. Submission instructions, along with the github flow for homework, are at the end of this notebook. The practice you will get submitting this homework will be essential for the submission of the forthcoming homework notebooks and your project. Table of Contents Homework 0 Survey due 4th September, 2015 Submission due 10th September, 2015 First Things 1. Create your github account 2. Class Survey 3. Piazza 4. Programming expectations 5. If you do not have a .edu email address Getting and installing Python Installing Anaconda Mac/Linux users Windows Users Troubleshooting Setting up your git environment 1. Installing git Windows specific notes Mac specific notes 2. Optional: Creating ssh keys on your machine 3. Optional: Uploading ssh keys and Authentication 4. Setting global config for git 5. Github tutorial Sign up for AWS 1. Get an AWS account 2. Sign up for AWS educate Hello, Python Python Libraries Installing additional libraries Testing latest libraries Kicking the tires Hello World Hello matplotlib Hello Numpy The Monty Hall Problem The workflow for homeworks and labs getting and working on labs getting and submitting homework First Things I cant stress this enough: Do this setup now! These first things are incredibly important. You must absolutely fill these out to get into the swing of things... 1. Create your github account If you do not have a github account as yet, create it at: https://github.com This step is mandatory. We will need your github username. We are using github for all aspects of this course, including doing and submitting homework collaborating on your project creating your web site To sign up for an account, just go to github and pick a unique username, an email address, and a password. Once you've done that, your github page will be at https://github.com/your-username. Github also provides a student developer package. This is something that might be nice to have, but it is not necessary for the course. Github may take some time to approve your application for the package. Please note that this is optional and you do not have to have the package approved to fill out the survey. 2. Class Survey Next, you must complete the mandatory course survey located here. It should only take a few moments of your time. Once you fill in the survey we will use the github username you provided to sign you up into the cs109-students organization on github. 
(see https://help.github.com/articles/how-do-i-access-my-organization-account/) It is imperative that you fill out the survey on time as we use the provided information to sign you in: your access to the homework depends on being in this organization. 3. Piazza Go to Piazza and sign up for the class using your Harvard e-mail address. If you do not have a Harvard email address write an email to staff@cs109.org and one of the TFs will sign you up. You will use Piazza as a forum for discussion, to find team members, to arrange appointments, and to ask questions. Piazza should be your primary form of communication with the staff. Use the staff e-mail (staff@cs109.org) only for individual requests, e.g., to excuse yourself from mandatory sections. All announcements, homework, and project descriptions will be posted on Piazza first. Introduction Once you are signed up to the Piazza course forum, introduce yourself to your classmates and course staff with a follow-up post in the introduction thread. Include your name/nickname, your affiliation, why you are taking this course, and tell us something interesting about yourself (e.g., an industry job, an unusual hobby, past travels, or a cool project you did, etc.). Also tell us whether you have experience with data science. 4. Programming expectations All the assignments and labs for this class will use Python and, for the most part, the browser-based IPython notebook format you are currently viewing. Knowledge of Python is not a prerequisite for this course, provided you are comfortable learning on your own as needed. While we have strived to make the programming component of this course straightforward, we will not devote much time to teaching prorgramming or Python syntax. Basically, you should feel comfortable with: How to look up Python syntax on Google and StackOverflow. Basic programming concepts like functions, loops, arrays, dictionaries, strings, and if statements. How to learn new libraries by reading documentation. Asking questions on StackOverflow or Piazza. There are many online tutorials to introduce you to scientific python programming. Here is a course that is very nice. Lectures 1-4 of this course are most relevant to this class. While we will cover some python programming in labs 1 and 2, we expect you to pick it up on the fly. 5. If you do not have a .edu email address Please get one, as you will need it to sign up for AWS educate, and if you want to sign up for the student developer github package you will need it as well. As a DCE student you are eligible for a FAS account and you can sign up here. Getting and installing Python You will be using Python throughout the course, including many popular 3rd party Python libraries for scientific computing. Anaconda is an easy-to-install bundle of Python and most of these libraries. We strongly recommend that you use Anaconda for this course. If you insist on using your own Python setup instead of Anaconda, we will not provide any installation support, and are not responsible for you loosing points on homework assignments in case of inconsistencies. For this course we are using Python 2, not Python 3. Also see: http://docs.continuum.io/anaconda/install The IPython or Jupyter notebook runs in the browser, and works best in Google Chrome or Safari for me. You probably want to use one of these for assignments in this course. Installing Anaconda The Anaconda Python distribution is an easily-installable bundle of Python and many of the libraries used throughout this class. 
Unless you have a good reason not to, we recommend that you use Anaconda. Mac/Linux users Download the appropriate version of Anaconda Follow the instructions on that page to run the installer Test out the IPython notebook: open a Terminal window, and type ipython notebook. Or use the Anaconda Launcher which might have been deposited on your desktop. A new browser window should pop up. Click New Notebook to create a new notebook file. Trick: give this notebook a unique name, like my-little-rose. Use Spotlight (upper right corner of the mac desktop, looks like a maginifier) to search for this name. In this way, you will know which folder your notebook opens in by default. Windows Users Download the appropriate version of Anaconda Follow the instructions on that page to run the installer. This will typically create a directory at C:\Anaconda Test it out: start the Anaconda launcher, which you can find in C:\Anaconda or, in the Start menu. Start the IPython notebook. A new browser window should open. Click New Notebook, which should open a new page. Trick: give this notebook a unique name, like my-little-rose. Use Explorer (usually start menu on windows desktops) to search for this name. In this way, you will know which folder your notebook opens in by default. If you did not add Anaconda to your path, be sure to use the full path to the python and ipython executables, such as /anaconda/bin/python. If you already have installed Anaconda at some point in the past, you can easily update to the latest Anaconda version by updating conda, then Anaconda as follows: conda update conda conda update anaconda Troubleshooting You must be careful to make sure you are running the Anaconda version of python, since those operating systems come preinstalled with their own versions of python. End of explanation """ x = [10, 20, 30, 40, 50] for item in x: print "Item is... you guessed it! ", item """ Explanation: Problem When you start python, you don't see a line like Python 2.7.5 |Anaconda 1.6.1 (x86_64)|. You are using a Mac or Linux computer Reason You are most likely running a different version of Python, and need to modify your Path (the list of directories your computer looks through to find programs). Solution Find a file like .bash_profile, .bashrc, or .profile. Open the file in a text editor, and add a line at this line at the end: export PATH="$HOME/anaconda/bin:$PATH". Close the file, open a new terminal window, type source ~/.profile (or whatever file you just edited). Type which python -- you should see a path that points to the anaconda directory. If so, running python should load the proper version If this doesn't work (typing which python doesn't point to anaconda), you might be using a different shell. Type echo $SHELL. If this isn't bash, you need to edit a different startup file (for example, if if echo $SHELL gives $csh, you need to edit your .cshrc file. The syntax for this file is slightly different: set PATH = ($HOME/anaconda/bin $PATH) Problem You are running the right version of python (see above item), but are unable to import numpy. Reason You are probably loading a different copy of numpy that is incompatible with Anaconda Solution See the above item to find your .bash_profile, .profile, or .bashrc file. Open it, and add the line unset PYTHONPATH at the end. Close the file, open a new terminal window, type source ~/.profile (or whatever file you just edited), and try again. 
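A small hedged diagnostic (not part of the original checklist) you can run inside Python to confirm which interpreter and which numpy you are actually picking up:

import sys
print sys.executable                       # should point inside your anaconda directory
import numpy
print numpy.__version__, numpy.__file__    # this should also live under anaconda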
Problem Under Windows, you receive an error message similar to the following: "'pip' is not recognized as an internal or external command, operable program or batch file." Reason The correct Anaconda paths might not be present in your PATH variable, or Anaconda might not have installed correctly. Solution Ensure the Anaconda directories to your path environment variable ("\Anaconda" and "\Anaconda\Scripts"). See this page for details. If this does not correct the problem, reinstall Anaconda. IF YOU ARE STILL HAVING ISSUES ON THE INSTALL, POST TO PIAZZA. WE'LL HELP YOU THERE. OR ASK IN YOUR SECTION Setting up your git environment 1. Installing git We will be using the command line version of git. On linux, install git using your system package manager (yum, apt-get, etc) On the Mac, if you ever installed Xcode, you should have git installed. Or you might have installed it using homebrew. Either of these are fine as long as the git version is greater than 2.0 Otherwise, on Mac and Windows, go to http://git-scm.com. Accept all defaults in the installation process. On Windows, installing git will also install for you a minimal unix environment with a "bash" shell and terminal window. Voila, your windows computer is transformed into a unixy form. Windows specific notes There will be an installer .exe file you need to click. Accept all the defaults. Here is a screenshot from one of the defaults. It makes sure you will have the "bash" tool talked about earlier. Choose the default line-encoding conversion: Use the terminal emulator they provide, its better than the one shipped with windows. Towards the end, you might see a message like this. It looks scary, but all you need to do is click "Continue" At this point you will be installed. You can bring up "git bash" either from your start menu, or from the right click menu on any folder background. When you do so, a terminal window will open. This terminal is where you will issue further git setup commands, and git commands in general. Get familiar with the terminal. It opens in your home folder, and maps \\ paths on windows to more web/unix like paths with '/'. Try issuing the commands ls, pwd, and cd folder where folder is one of the folders you see when you do a ls. You can do a cd .. to come back up. You can also use the terminal which comes with the ipython notebook. More about that later. Mac specific notes As mentioned earlier, if you ever installed Xcode or the "Command Line Developer tools", you may already have git. Make sure its version 2.0 or higher. (git --version) Or if you use Homebrew, you can install it from there. The current version on homebrew is 2.4.3 You dont need to do anyting more in this section. First click on the .mpkg file that comes when you open the downloaded .dmg file. When I tried to install git on my mac, I got a warning saying my security preferences wouldnt allow it to be installed. So I opened my system preferences and went to "Security". Here you must click "Open Anyway", and the installer will run. The installer puts git as /usr/local/git/bin/git. Thats not a particularly useful spot. Open up Terminal.app.Its usually in /Applications/Utilities. Once the terminal opens up, issue sudo ln -s /usr/local/git/bin/git /usr/local/bin/git. Keep the Terminal application handy in your dock. (You could also download and use iTerm.app, which is a nicer terminal, if you are into terminal geekery). We'll be using the terminal extensively for git. You can also use the terminal which comes with the ipython notebook. 
More about that later. Try issuing the commands ls, pwd, and cd folder where folder is one of the folders you see when you do a ls. You can do a cd .. to come back up. 2. Optional: Creating ssh keys on your machine This ia an optional step. But it makes things much easier. There are two ways git talks to github: https, which is a web based protocol or over ssh Which one you use is your choice. I recommend ssh, and the github urls in this homework and in labs will be ssh urls. Every time you contact your upstream repository (hosted on github), you need to prove you're you. You can do this with passwords over HTTPS, but it gets old quickly. By providing an ssh public key to github, your ssh-agent will handle all of that for you, and you wont have to put in any passwords. At your terminal, issue the command (skip this if you are a seasoned ssh user and already have keys): ssh-keygen -t rsa It will look like this: Accept the defaults. When it asks for a passphrase for your keys, put in none. (you can put in one if you know how to set up a ssh-agent). This will create two files for you, in your home folder if you accepted the defaults. id_rsa is your PRIVATE key. NEVER NEVER NEVER give that to anyone. id_rsa.pub is your public key. You must supply this to github. 3. Optional: Uploading ssh keys and Authentication To upload an ssh key, log in to github and click on the gear icon in the top right corner (settings). Once you're there, click on "SSH keys" on the left. This page will contain all your ssh keys once you upload any. Click on "add ssh key" in the top right. You should see this box: <img src="github_ssh.png" alt="github ssh" style="width: 500px;"/> The title field should be the name of your computer or some other way to identify this particular ssh key. In the key field, you'll need to copy and paste your public key. Do not paste your private ssh key here. When you hit "Add key", you should see the key name and some hexadecimal characters show up in the list. You're set. Now, whenever you clone a repository using this form: $ git clone git@github.com:rdadolf/ac297r-git-demo.git, you'll be connecting over ssh, and will not be asked for your github password You will need to repeat steps 2 and 3 of the setup for each computer you wish to use with github. 4. Setting global config for git Again, from the terminal, issue the command git config --global user.name "YOUR NAME" This sets up a name for you. Then do git config --global user.email "YOUR EMAIL ADDRESS" Use the SAME email address you used in setting up your github account. These commands set up your global configuration. On my Mac, these are stored in the text file .gitconfig in my home folder. 5. Github tutorial Read our git and github tutorial from Lab 1. Then come back here. If you have any issues or questions: Ask us! On Piazza or in Sections! Sign up for AWS For the course you need to sign up for Amazon Web Services (AWS). The sign up process has two steps: Get an AWS account Sign up for AWS educate The AWS account will enable you to access Amazon's webservices. The AWS educate sign up will provide you with $100 worth of free credits. 1. Get an AWS account Note: You can skip this step if you already have an account. Go to this webpage Click on the yellow box in the upper right corner saying "Create an AWS account" Follow the normal instructions and fill in all necessary information to create your account. Once you have an account you need your account ID. The account ID is a 12 digit number. 
Please follow this description to find your ID in the Support menu of your AWS console. 2. Sign up for AWS educate Note: You will need your 12 digit AWS account ID for this step. Go to this webpage Click on the right on the button saying "Apply for AWS Educate for Students" Confirm that you are a student Fill out the form Note that that you provide should come from your institution, which means it should end in .edu It might take a few days for your request to be approved. Once again, ping us if you need help! Hello, Python The IPython/Jupyter notebook is an application to build interactive computational notebooks. You'll be using them to complete labs and homework. Once you've set up Python, please download this page, and open it with IPython by typing ipython notebook &lt;name_of_downloaded_file&gt; You can also open the notebook in any folder by cding to the folder in the terminal, and typing ipython notebook . in that folder. The anaconda install also probably dropped a launcher on your desktop. You can use the launcher, and select "ipython notebbok" or "jupyter notebook" from there. In this case you will need to find out which folder you are running in. It loolks like this for me: Notice that you can use the user interface to create new folders and text files, and even open new terminals, all of which might come useful to you. To create a new notebook, you can use "Python 2" under notebooks. You may not have the other choices available (I have julia for example, which is another language that uses the same notebook interface). For the rest of the assignment, use your local copy of this page, running on IPython. Notebooks are composed of many "cells", which can contain text (like this one), or code (like the one below). Double click on the cell below, and evaluate it by clicking the "play" button above, for by hitting shift + enter End of explanation """ !pip install BeautifulSoup seaborn pyquery """ Explanation: Python Libraries Installing additional libraries Anaconda includes most of the libraries we will use in this course, but you will need to install a few extra ones for the beginning of this course: BeautifulSoup Seaborn PyQuery The recommended way to install these packages is to run !pip install BeautifulSoup seaborn pyquery in a code cell in the ipython notebook you just created. On windows, you might want to run pip install BeautifulSoup seaborn pyquery on the git-bash.exe terminal (note, the exclamation goes away). If this doesn't work, you can download the source code, and run python setup.py install from the source code directory. On Unix machines(Mac or Linux), either of these commands may require sudo (i.e. sudo pip install... 
or sudo python) End of explanation """ #IPython is what you are using now to run the notebook import IPython print "IPython version: %6.6s (need at least 3.0.0)" % IPython.__version__ # Numpy is a library for working with Arrays import numpy as np print "Numpy version: %6.6s (need at least 1.9.1)" % np.__version__ # SciPy implements many different numerical algorithms import scipy as sp print "SciPy version: %6.6s (need at least 0.15.1)" % sp.__version__ # Pandas makes working with data tables easier import pandas as pd print "Pandas version: %6.6s (need at least 0.16.2)" % pd.__version__ # Module for plotting import matplotlib print "Mapltolib version: %6.6s (need at least 1.4.1)" % matplotlib.__version__ # SciKit Learn implements several Machine Learning algorithms import sklearn print "Scikit-Learn version: %6.6s (need at least 0.16.1)" % sklearn.__version__ # Requests is a library for getting data from the Web import requests print "requests version: %6.6s (need at least 2.0.0)" % requests.__version__ #BeautifulSoup is a library to parse HTML and XML documents import bs4 print "BeautifulSoup version:%6.6s (need at least 4.4)" % bs4.__version__ import pyquery print "Loaded PyQuery" """ Explanation: If you've successfully completed the above install, all of the following statements should run. Testing latest libraries End of explanation """ # The %... is an iPython thing, and is not part of the Python language. # In this case we're just telling the plotting library to draw things on # the notebook, instead of on a separate window. %matplotlib inline #this line above prepares IPython notebook for working with matplotlib # See all the "as ..." contructs? They're just aliasing the package names. # That way we can call methods like plt.plot() instead of matplotlib.pyplot.plot(). import numpy as np # imports a fast numerical programming library import scipy as sp #imports stats functions, amongst other things import matplotlib as mpl # this actually imports matplotlib import matplotlib.cm as cm #allows us easy access to colormaps import matplotlib.pyplot as plt #sets up plotting under plt import pandas as pd #lets us handle data as dataframes #sets up pandas table display pd.set_option('display.width', 500) pd.set_option('display.max_columns', 100) pd.set_option('display.notebook_repr_html', True) import seaborn as sns #sets up styles and gives us more plotting options """ Explanation: If any of these libraries are missing or out of date, you will need to install them and restart IPython. Kicking the tires Lets try some things, starting from very simple, to more complex. Hello World The following is the incantation we like to put at the beginning of every notebook. It loads most of the stuff we will regularly use. End of explanation """ x = np.linspace(0, 10, 30) #array of 30 points from 0 to 10 y = np.sin(x) z = y + np.random.normal(size=30) * .2 plt.plot(x, y, 'o-', label='A sine wave') plt.plot(x, z, '--', label='Noisy sine') plt.legend(loc = 'lower right') plt.xlabel("X axis") plt.ylabel("Y axis") """ Explanation: Hello matplotlib The notebook integrates nicely with Matplotlib, the primary plotting package for python. This should embed a figure of a sine wave: End of explanation """ print "Make a 3 row x 4 column array of random numbers" x = np.random.random((3, 4)) print x print print "Add 1 to every element" x = x + 1 print x print print "Get the element at row 1, column 2" print x[1, 2] print # The colon syntax is called "slicing" the array. 
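# (Added aside, not part of the original crash course: a slice such as x[0, :] reads as
#  "row 0, every column", and x[0, ::2] below keeps every second column of that row.)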
print "Get the first row" print x[0, :] print print "Get every 2nd column of the first row" print x[0, ::2] print """ Explanation: If that last cell complained about the %matplotlib line, you need to update IPython to v1.0, and restart the notebook. See the installation page Hello Numpy The Numpy array processing library is the basis of nearly all numerical computing in Python. Here's a 30 second crash course. For more details, consult Chapter 4 of Python for Data Analysis, or the Numpy User's Guide End of explanation """ print "Max is ", x.max() print "Min is ", x.min() print "Mean is ", x.mean() """ Explanation: Print the maximum, minimum, and mean of the array. This does not require writing a loop. In the code cell below, type x.m&lt;TAB&gt;, to find built-in operations for common array statistics like this End of explanation """ print x.max(axis=1) """ Explanation: Call the x.max function again, but use the axis keyword to print the maximum of each row in x. End of explanation """ x = np.random.binomial(500, .5) print "number of heads:", x """ Explanation: Here's a way to quickly simulate 500 coin "fair" coin tosses (where the probabily of getting Heads is 50%, or 0.5) End of explanation """ # 3 ways to run the simulations # loop heads = [] for i in range(500): heads.append(np.random.binomial(500, .5)) # "list comprehension" heads = [np.random.binomial(500, .5) for i in range(500)] print len(heads) # pure numpy heads = np.random.binomial(500, .5, size=500) histogram = plt.hist(heads, bins=10) heads.shape """ Explanation: Repeat this simulation 500 times, and use the plt.hist() function to plot a histogram of the number of Heads (1s) in each simulation End of explanation """ """ Function -------- simulate_prizedoor Generate a random array of 0s, 1s, and 2s, representing hiding a prize between door 0, door 1, and door 2 Parameters ---------- nsim : int The number of simulations to run Returns ------- sims : array Random array of 0s, 1s, and 2s Example ------- >>> print simulate_prizedoor(3) array([0, 0, 2]) """ def simulate_prizedoor(nsim): return np.random.randint(0, 3, (nsim)) """ Explanation: The Monty Hall Problem Here's a fun and perhaps surprising statistical riddle, and a good way to get some practice writing python functions In a gameshow, contestants try to guess which of 3 closed doors contain a cash prize (goats are behind the other two doors). Of course, the odds of choosing the correct door are 1 in 3. As a twist, the host of the show occasionally opens a door after a contestant makes his or her choice. This door is always one of the two the contestant did not pick, and is also always one of the goat doors (note that it is always possible to do this, since there are two goat doors). At this point, the contestant has the option of keeping his or her original choice, or swtiching to the other unopened door. The question is: is there any benefit to switching doors? The answer surprises many people who haven't heard the question before. We can answer the problem by running simulations in Python. We'll do it in several parts. First, write a function called simulate_prizedoor. This function will simulate the location of the prize in many games -- see the detailed specification below: End of explanation """ """ Function -------- simulate_guess Return any strategy for guessing which door a prize is behind. This could be a random strategy, one that always guesses 2, whatever. 
Parameters ---------- nsim : int The number of simulations to generate guesses for Returns ------- guesses : array An array of guesses. Each guess is a 0, 1, or 2 Example ------- >>> print simulate_guess(5) array([0, 0, 0, 0, 0]) """ def simulate_guess(nsim): return np.zeros(nsim, dtype=np.int) """ Explanation: Next, write a function that simulates the contestant's guesses for nsim simulations. Call this function simulate_guess. The specs: End of explanation """ """ Function -------- goat_door Simulate the opening of a "goat door" that doesn't contain the prize, and is different from the contestants guess Parameters ---------- prizedoors : array The door that the prize is behind in each simulation guesses : array THe door that the contestant guessed in each simulation Returns ------- goats : array The goat door that is opened for each simulation. Each item is 0, 1, or 2, and is different from both prizedoors and guesses Examples -------- >>> print goat_door(np.array([0, 1, 2]), np.array([1, 1, 1])) >>> array([2, 2, 0]) """ def goat_door(prizedoors, guesses): #strategy: generate random answers, and #keep updating until they satisfy the rule #that they aren't a prizedoor or a guess result = np.random.randint(0, 3, prizedoors.size) while True: bad = (result == prizedoors) | (result == guesses) if not bad.any(): return result result[bad] = np.random.randint(0, 3, bad.sum()) """ Explanation: Next, write a function, goat_door, to simulate randomly revealing one of the goat doors that a contestant didn't pick. End of explanation """ """ Function -------- switch_guess The strategy that always switches a guess after the goat door is opened Parameters ---------- guesses : array Array of original guesses, for each simulation goatdoors : array Array of revealed goat doors for each simulation Returns ------- The new door after switching. Should be different from both guesses and goatdoors Examples -------- >>> print switch_guess(np.array([0, 1, 2]), np.array([1, 2, 1])) >>> array([2, 0, 0]) """ def switch_guess(guesses, goatdoors): result = np.zeros(guesses.size) switch = {(0, 1): 2, (0, 2): 1, (1, 0): 2, (1, 2): 1, (2, 0): 1, (2, 1): 0} for i in [0, 1, 2]: for j in [0, 1, 2]: mask = (guesses == i) & (goatdoors == j) if not mask.any(): continue result = np.where(mask, np.ones_like(result) * switch[(i, j)], result) return result """ Explanation: Write a function, switch_guess, that represents the strategy of always switching a guess after the goat door is opened. 
End of explanation """ """ Function -------- win_percentage Calculate the percent of times that a simulation of guesses is correct Parameters ----------- guesses : array Guesses for each simulation prizedoors : array Location of prize for each simulation Returns -------- percentage : number between 0 and 100 The win percentage Examples --------- >>> print win_percentage(np.array([0, 1, 2]), np.array([0, 0, 0])) 33.333 """ def win_percentage(guesses, prizedoors): return 100 * (guesses == prizedoors).mean() """ Explanation: Last function: write a win_percentage function that takes an array of guesses and prizedoors, and returns the percent of correct guesses End of explanation """ nsim = 10000 #keep guesses print "Win percentage when keeping original door" print win_percentage(simulate_prizedoor(nsim), simulate_guess(nsim)) #switch pd = simulate_prizedoor(nsim) guess = simulate_guess(nsim) goats = goat_door(pd, guess) guess = switch_guess(guess, goats) print "Win percentage when switching doors" print win_percentage(pd, guess).mean() """ Explanation: Now, put it together. Simulate 10000 games where contestant keeps his original guess, and 10000 games where the contestant switches his door after a goat door is revealed. Compute the percentage of time the contestant wins under either strategy. Is one strategy better than the other? End of explanation """
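# A small analytic sanity check (an added sketch, not part of the original homework):
# with 3 doors, keeping the original guess wins only when the first pick was right
# (probability 1/3), while switching wins whenever the first pick was wrong (2/3).
p_keep = 100 * (1. / 3)
p_switch = 100 * (2. / 3)
print "Theoretical win percentage when keeping the original door:", p_keep
print "Theoretical win percentage when switching doors:", p_switch
"""
Explanation: The cell above is an added analytic check placed next to the simulation: the exact win probabilities are 1/3 for keeping the original guess and 2/3 for switching, so the simulated percentages should land close to 33% and 67%.
End of explanation
"""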
charlesreid1/rejoyce
Lestrygonians Part 4.ipynb
mit
%matplotlib inline import nltk, re, io import numpy as np import pandas as pd import seaborn as sns from matplotlib.pylab import * txtfile = 'txt/08lestrygonians.txt' from nltk.tokenize import RegexpTokenizer tokenizer = RegexpTokenizer(r'\w+') with io.open(txtfile) as f: tokens = tokenizer.tokenize(f.read()) print tokens[1000:1020] print tokenizer.tokenize("can't keep a contraction together!") """ Explanation: Analyzing Ulysses with NLTK: Lestrygonians (Ch. 8) Part IV: Wordplay <br /> <br /> <br /> Table of Contents Introduction Tokenizing Without Punctuation Method 1: TokenSearcher Object Method 2: Bigram Splitting Method Functionalizing Bigram Search Methods <br /> <br /> <br /> <a name="intro"></a> Introduction In this notebook we'll analyze some of Joyce's wordplay in Ulysses, using more complicated regular expressions. <a name="tokenizing_wo_punctuation"></a> Tokenizing Without Punctuation To tokenize the chapter and throw out the punctuation, we can use the regular expression \w+. Note that this will split up contractions like "can't" into ["can","t"]. End of explanation """ tsearch = nltk.TokenSearcher(tokens) s_s_ = tsearch.findall(r'<s.*> <.*> <s.*> <.*> <.*>') print len(s_s_) for s in s_s_: print ' '.join(s) """ Explanation: <a name="tokensearcher"></a> Method 1: TokenSearcher Object The first method for searching for regular expressions in a set of tokens is the TokenSearcher object. This can be fed a regular expression that searches across tokens, and it will search through each token. This provides a big advantage: we don't have to manually break all of our tokens into n-grams ourselves, we can just let the TokenSearcher do the hard work. Here's an example of how to create and call that object: End of explanation """ def printlist(the_list): for item in the_list: print item alliteration = [] for (i,j) in nltk.bigrams(tokens): if i[:1]==j[:1]: alliteration.append( ' '.join([i,j]) ) print "Found",len(alliteration),"pairs of words starting with the same letter:" printlist(alliteration[:10]) printlist(alliteration[-10:]) lolly = [] for (i,j) in nltk.bigrams(tokens): if len( re.findall('ll',i) )>0: if len( re.findall('l',j) )>0: lolly.append( ' '.join([i,j]) ) elif len( re.findall('ll',j) )>0: if len( re.findall('l',i) )>0: lolly.append(' '.join([i,j]) ) print "Found",len(lolly),"pairs of words, one containing 'll' and the other containing 'l':" print "First 25:" printlist(lolly[:25]) lolly = [] for (i,j) in nltk.bigrams(tokens): if len( re.findall('rr',i) )>0: if len( re.findall('r',j) )>0: lolly.append( ' '.join([i,j]) ) elif len( re.findall('rr',j) )>0: if len( re.findall('r',i) )>0: lolly.append(' '.join([i,j]) ) print "Found",len(lolly),"pairs of words, one containing 'r' and the other containing 'r':" printlist(lolly) """ Explanation: <a name="bigram_splitting"></a> Method 2: Bigram Splitting Method Another way of searching for patterns, one that may be needed if we want to use criteria that would be hard to implement with a regular expression (such as finding two words that are the same length next to each other), is to assemble all of the tokens into bigrams. Suppose we are looking for two words that start with the same letter. We can do this by iterating through a set of bigrams (we'll use a built-in NLTK object to generate bigrams), and apply our search criteria to the first and second words independently. To create bigrams, we'll use the nltk.bigrams() method, feeding it a list of tokens. When we do this, we can see there's a lot of alliteration in this chapter. 
End of explanation
"""
def double_letter_alliteration(c,tokens):
    """
    This function finds all bigrams in which one word contains the doubled
    letter c (e.g. 'll') and the neighboring word contains the single letter c.
    This function is called by all_double_letter_alliteration().
    """
    allall = []
    for (i,j) in nltk.bigrams(tokens):
        if len( re.findall(c+c,i) )>0:
            if len( re.findall(c,j) )>0:
                allall.append( ' '.join([i,j]) )
        elif len( re.findall(c+c,j) )>0:
            if len( re.findall(c,i) )>0:
                allall.append(' '.join([i,j]) )
    return allall
"""
Explanation: <a name="functionalizing"></a> Functionalizing Bigram Searches
We can functionalize the search for patterns with a single and double character shared, i.e., dropping currants (the letter r).
End of explanation
"""
printlist(double_letter_alliteration('r',tokens))

printlist(double_letter_alliteration('o',tokens))

import string
def all_double_letter_alliteration(tokens):
    all_all = []
    alphabet = list(string.ascii_lowercase)
    for aleph in alphabet:
        results = double_letter_alliteration(aleph,tokens)
        print "Matching",aleph,":",len(results)
        all_all += results
    return all_all

allall = all_double_letter_alliteration(tokens)
print len(allall)
"""
Explanation: Now we can use this function to search for the single-double letter pattern individually, or we can define a function that will loop over all 26 letters to find all matching patterns.
End of explanation
"""
float(len(allall))/len(tokens)
"""
Explanation: That's a mouthful of alliteration! We can compare the number of words that matched this (one, single) search for examples of alliteration to the total number of words in the chapter:
End of explanation
"""
print len(allall)
printlist(allall[:20])
"""
Explanation: Holy cow - 2.6% of the chapter is just this one alliteration pattern, of having two neighbor words: one with a double letter, and one with a single letter.
End of explanation
"""
def match_double(aleph,tokens):
    matches = []
    for (i,j) in nltk.bigrams(tokens):
        if len( re.findall(aleph+aleph,i) )>0:
            if len( re.findall(aleph+aleph,j) )>0:
                matches.append(' '.join([i,j]))
    return matches

def double_double(tokens):
    dd = []
    alphabet = list(string.ascii_lowercase)
    for aleph in alphabet:
        results = match_double(aleph, tokens)
        print "Matching %s%s: %d"%(aleph,aleph,len(results))
        dd += results
    return dd

print "Neighbor words with double letters:"
dd = double_double(tokens)
printlist(dd)
"""
Explanation: Let's look at the pattern taken one step further: we'll look for double letters in neighbor words.
End of explanation
"""
with io.open(txtfile) as f:
    sentences = nltk.sent_tokenize(f.read())
print len(sentences)

acronyms = []
for s in sentences:
    s2 = re.sub('\n',' ',s)
    words = s2.split(" ")
    acronym = ''.join(w[0] for w in words if w != u'')
    acronyms.append(acronym)

print len(acronyms)
print "-"*20
printlist(acronyms[:10])
print "-"*20
printlist(sentences[:10]) # <-- contains newlines, but removed to create acronyms

from nltk.corpus import words

acronyms[101:111]
"""
Explanation: <a name="acronyms"></a> Acronyms
Let's take a look at some acronyms. For this application, it might be better to tokenize by sentence, and extract acronyms for sentences.
End of explanation
"""
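# A possible next step (an added sketch, not part of the original analysis): since the
# NLTK word corpus was imported above, keep only those sentence acronyms that happen to
# be valid English words. The length cut-off of 2 letters is an arbitrary choice.
english_words = set(w.lower() for w in words.words())
word_acronyms = [a for a in acronyms if len(a) > 2 and a.lower() in english_words]
print len(word_acronyms)
printlist(word_acronyms[:20])
"""
Explanation: The cell above is only a sketch of where the acronym search could go next; it assumes the intent behind importing nltk.corpus.words was to test which sentence acronyms are real English words.
End of explanation
"""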
UWSEDS/LectureNotes
Spring2019/02_Procedural_Python/Procedural Programming.ipynb
bsd-2-clause
instructors = ['Dave', 'Joe', 'Bernease', 'Dorkus the Clown'] instructors """ Explanation: Procedural programming in python Topics Flow control, part 1 If For range() function Some hacky hack time Exercises <hr> <hr> Review of Data Types | type | description | |------|------------| | primitive | int, float, string, bool | | tuple | An immutable collection of ordered objects | | list | A mutable collection of ordered objects | | dictionary | A mutable collection of named objects | <hr> Flow control Flow control refers how to programs do loops, conditional execution, and order of functional operations. Let's start with conditionals, or the venerable if statement. Let's start with a simple list of instructors for these classes. End of explanation """ if 'Dorkus the Clown' in instructors: print('#fakeinstructor') """ Explanation: If If statements can be use to execute some lines or block of code if a particular condition is satisfied. E.g. Let's print something based on the entries in the list. End of explanation """ if 'Dorkus the Clown' in instructors: print('There are fake names for class instructors in your list!') else: print("Nothing to see here") """ Explanation: Usually we want conditional logic on both sides of a binary condition, e.g. some action when True and some when False End of explanation """ if 'Joe' in instructors: print("Congratulations! Joe is teaching, your class won't stink!") else: pass """ Explanation: There is a special do nothing word: pass that skips over some arm of a conditional, e.g. End of explanation """ if True is False: print("I'm so confused") else: print("Everything is right with the world") """ Explanation: Note: what have you noticed in this session about quotes? What is the difference between ' and "? Another simple example: End of explanation """ my_favorite = 'pie' if my_favorite is 'cake': print("He likes cake! I'll start making a double chocolate velvet cake right now!") elif my_favorite is 'pie': print("He likes pie! I'll start making a cherry pie right now!") else: print("He likes " + my_favorite + ". I don't know how to make that.") """ Explanation: It is always good practice to handle all cases explicity. Conditional fall through is a common source of bugs. Sometimes we wish to test multiple conditions. Use if, elif, and else. End of explanation """ my_favorite = 'pie' if my_favorite is 'cake' or my_favorite is 'pie': print(my_favorite + " : I have a recipe for that!") else: print("Ew! Who eats that?") """ Explanation: Conditionals can take and and or and not. E.g. End of explanation """ for instructor in instructors: print(instructor) """ Explanation: For For loops are the standard loop, though while is also common. For has the general form: for items in list: do stuff For loops and collections like tuples, lists and dictionaries are natural friends. End of explanation """ for instructor in instructors: if instructor.endswith('Clown'): print(instructor + " doesn't sound like a real instructor name!") else: print(instructor + " is so smart... all those gooey brains!") """ Explanation: You can combine loops and conditionals: End of explanation """ range(3) """ Explanation: Dictionaries can use the keys method for iterating. range() Since for operates over lists, it is common to want to do something like: NOTE: C-like for (i = 0; i &lt; 3; ++i) { print(i); } The Python equivalent is: for i in [0, 1, 2]: do something with i What happens when the range you want to sample is big, e.g. 
NOTE: C-like for (i = 0; i &lt; 1000000000; ++i) { print(i); } That would be a real pain in the rear to have to write out the entire list from 1 to 1000000000. Enter, the range() function. E.g. range(3) is [0, 1, 2] End of explanation """ list(range(3)) """ Explanation: Notice that Python (in the newest versions, e.g. 3+) has an object type that is a range. This saves memory and speeds up calculations vs. an explicit representation of a range as a list - but it can be automagically converted to a list on the fly by Python. To show the contents as a list we can use the type case like with the tuple above. Sometimes, in older Python docs, you will see xrange. This used the range object back in Python 2 and range returned an actual list. Beware of this! End of explanation """ for index in range(3): instructor = instructors[index] if instructor.endswith('Clown'): print(instructor + " doesn't sound like a real instructor name!") else: print(instructor + " is so smart... all those gooey brains!") """ Explanation: Remember earlier with slicing, the syntax :3 meant [0, 1, 2]? Well, the same upper bound philosophy applies here. End of explanation """ for index in range(len(instructors)): instructor = instructors[index] if instructor.endswith('Clown'): print(instructor + " doesn't sound like a real instructor name!") else: print(instructor + " is so smart... all those gooey brains!") """ Explanation: This would probably be better written as End of explanation """ sum = 0 for i in range(10): sum += i print(sum) """ Explanation: But in all, it isn't very Pythonesque to use indexes like that (unless you have another reason in the loop) and you would opt instead for the instructor in instructors form. More often, you are doing something with the numbers that requires them to be integers, e.g. math. End of explanation """ for i in range(1, 4): for j in range(1, 4): print('%d * %d = %d' % (i, j, i*j)) # Note string formatting here, %d means an integer """ Explanation: For loops can be nested Note: for more on formatting strings, see: https://pyformat.info End of explanation """ for i in range(10): if i == 4: break i """ Explanation: You can exit loops early if a condition is met: End of explanation """ sum = 0 for i in range(10): if (i == 5): continue else: sum += i print(sum) """ Explanation: You can skip stuff in a loop with continue End of explanation """ sum = 0 for i in range(10): sum += i else: print('final i = %d, and sum = %d' % (i, sum)) """ Explanation: There is a unique language feature call for...else End of explanation """ my_string = "DIRECT" for c in my_string: print(c) """ Explanation: You can iterate over letters in a string End of explanation """ def print_string(str): """This prints out a string passed as the parameter.""" print(str) return """ Explanation: <hr> Exercise Objective: Replace the bash magic bits for downloading the Pronto data and uncompressing it with Python code. Since the download is big, check if the zip file exists first before downloading it again. Then load it into a pandas dataframe. 
Notes: * The os package has tools for checking if a file exists: os.path.exists import os filename = 'pronto.csv' if os.path.exists(filename): print("wahoo!") * Use the requests package to get the file given a url (got this from the requests docs) import requests url = 'https://s3.amazonaws.com/pronto-data/open_data_year_two.zip' req = requests.get(url) assert req.status_code == 200 # if the download failed, this line will generate an error with open(filename, 'wb') as f: f.write(req.content) * Use the zipfile package to decompress the file while reading it into pandas import pandas as pd import zipfile csv_filename = '2016_trip_data.csv' zf = zipfile.ZipFile(filename) data = pd.read_csv(zf.open(csv_filename)) Now, use your code from above for the following URLs and filenames | URL | filename | csv_filename | |-----|----------|--------------| | https://github.com/UWSEDS/LectureNotes/blob/master/open_data_year_two_set1.zip?raw=true | open_data_year_two_set1.zip | 2016_trip_data_set1.csv | | https://github.com/UWSEDS/LectureNotes/blob/master/open_data_year_two_set2.zip?raw=true | open_data_year_two_set2.zip | 2016_trip_data_set2.csv | | https://github.com/UWSEDS/LectureNotes/blob/master/open_data_year_two_set3.zip?raw=true | open_data_year_two_set3.zip | 2016_trip_data_set3.csv | What pieces of the data structures and flow control that we talked about earlier can you use? <hr> Functions For loops let you repeat some code for every item in a list. Functions are similar in that they run the same lines of code for new values of some variable. They are different in that functions are not limited to looping over items. Functions are a critical part of writing easy to read, reusable code. Create a function like: def function_name (parameters): """ optional docstring """ function expressions return [variable] Note: Sometimes I use the word argument in place of parameter. Here is a simple example. It prints a string that was passed in and returns nothing. End of explanation """ print_string("Dave is awesome!") """ Explanation: To call the function, use: print_string("Dave is awesome!") Note: The function has to be defined before you can call it! End of explanation """ def change_list(my_list): """This changes a passed list into this function""" my_list.append('four'); print('list inside the function: ', my_list) return my_list = [1, 2, 3]; print('list before the function: ', my_list) change_list(my_list); print('list after the function: ', my_list) """ Explanation: If you don't provide an argument or too many, you get an error. Parameters (or arguments) in Python are all passed by reference. This means that if you modify the parameters in the function, they are modified outside of the function. See the following example: ``` def change_list(my_list): """This changes a passed list into this function""" my_list.append('four'); print('list inside the function: ', my_list) return my_list = [1, 2, 3]; print('list before the function: ', my_list) change_list(my_list); print('list after the function: ', my_list) ``` End of explanation """ def print_name(first, last='the Clown'): print('Your name is %s %s' % (first, last)) return """ Explanation: Variables have scope: global and local In a function, new variables that you create are not saved when the function returns - these are local variables. Variables defined outside of the function can be accessed but not changed - these are global variables, Note there is a way to do this with the global keyword. 
Generally, the use of global variables is not encouraged, instead use parameters. ``` my_global_1 = 'bad idea' my_global_2 = 'another bad one' my_global_3 = 'better idea' def my_function(): print(my_global) my_global_2 = 'broke your global, man!' global my_global_3 my_global_3 = 'still a better idea' return my_function() print(my_global_2) print(my_global_3) ``` In general, you want to use parameters to provide data to a function and return a result with the return. E.g. def sum(x, y): my_sum = x + y return my_sum If you are going to return multiple objects, what data structure that we talked about can be used? Give and example below. Parameters have four different types: | type | behavior | |------|----------| | required | positional, must be present or error, e.g. my_func(first_name, last_name) | | keyword | position independent, e.g. my_func(first_name, last_name) can be called my_func(first_name='Dave', last_name='Beck') or my_func(last_name='Beck', first_name='Dave') | | default | keyword params that default to a value if not provided | End of explanation """ def print_name_age(first, last, age): print_name(first, last) print('Your age is %d' % (age)) if age > 35: print('You are really old.') return print_name_age(age=40, last='Beck', first='Dave') """ Explanation: Play around with the above function. Functions can contain any code that you put anywhere else including: * if...elif...else * for...else * while * other function calls End of explanation """
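# A possible answer to the earlier question about returning multiple objects (a
# hypothetical example, not part of the original lecture notes): pack the results into
# a tuple and unpack them at the call site. The running total avoids the built-in sum,
# which was shadowed by an integer in an earlier cell.
def min_max_mean(values):
    """Return the minimum, maximum and mean of a list as a tuple."""
    total = 0
    for v in values:
        total += v
    return min(values), max(values), total / len(values)

lo, hi, avg = min_max_mean([3, 1, 4, 1, 5, 9])
print('min = %d, max = %d, mean = %.2f' % (lo, hi, avg))
"""
Explanation: The function above is a made-up illustration (min_max_mean is not part of the course materials) showing that a tuple is the natural data structure for returning several values from a function, as asked a few cells earlier.
End of explanation
"""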
owlas/magpy
docs/source/notebooks/two-particle-equilibrium.ipynb
bsd-3-clause
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import axes3d
from tqdm import tqdm_notebook
#import tqdm
import magpy as mp

%matplotlib inline
"""
Explanation: Two particle equilibrium
If you haven't read the One particle equilibrium notebook yet, go and read it now.
In the previous notebook we showed that we can use Magpy to compute the correct thermal equilibrium for a single particle. However, we also need to check that the interactions are correctly implemented by simulating the thermal equilibrium of multiple interacting particles. In this notebook we'll simulate an ensemble of two particle systems with Magpy. Instead of computing the distribution analytically, we will use the Metropolis Markov-Chain Monte-Carlo technique to generate the correct equilibrium.
Acknowledgements
Many thanks to Jonathon Waters for the terse python implementation of the Metropolis algorithm!
Problem setup
In this example the system comprises two identical particles separated by a distance $R$. The particles have their anisotropy axes in the same direction. We are interested in the following four variables: the angles between the particles' moments and the anisotropy axis $\theta_1,\theta_2$ and the rotational (azimuth) angles of the particles around the anisotropy axis $\phi_1,\phi_2$
Modules
End of explanation
"""
def e_anisotropy(moments, anisotropy_axes, V, K, particle_id):
    cos_t = np.sum(moments[particle_id, :]*anisotropy_axes[particle_id, :])
    return -K*V*cos_t**2
"""
Explanation: Metropolis MCMC
Energy terms
Anisotropy
The energy contribution from the anisotropy of a single particle $i$ is:
$$E^a_i=-K_iV_i\left(\vec{m}_i\cdot\hat{k}_i\right)^2$$
where $\hat{k}_i$ is the unit vector along the anisotropy axis; this is the $-KV\cos^2\theta$ term computed by e_anisotropy.
End of explanation
"""
def e_dipole(moments, positions, Ms, V, particle_id):
    mu_0 = mp.core.get_mu0()
    mask = np.ones(moments.shape[0], dtype=bool)
    mask[particle_id] = False
    rs = positions[mask]-positions[particle_id, :]
    mod_rs = np.linalg.norm(rs, axis=1)
    rs[:, 0] = rs[:, 0] / mod_rs
    rs[:, 1] = rs[:, 1] / mod_rs
    rs[:, 2] = rs[:, 2] / mod_rs
    m1_m2 = np.sum(moments[particle_id, :]*moments[mask], axis=1)
    m1_r = np.sum(moments[particle_id, :]*rs, axis=1)
    m2_r = np.sum(moments[mask]*rs, axis=1)
    numer = (V**2)*(Ms**2)*mu_0*(3*m1_r*m2_r - m1_m2)
    denom = 4*np.pi*np.power(mod_rs, 3)
    return -np.sum(numer/denom)
"""
Explanation: Dipolar interaction energy
The energy contribution from $N$ particles $j=1,2,\dots,N$ interacting with a single particle $i$:
$$E^d_{i} = -\sum_j\frac{\mu_0 V_i^2 M_s^2 \left(3 (\vec{m}_i\cdot\hat{r}_{ij})(\vec{m}_j\cdot\hat{r}_{ij}) - \vec{m}_i\cdot\vec{m}_j\right)}{4\pi\left|\vec{r}_{ij}\right|^3}$$
where $\hat{r}_{ij}$ is the unit vector pointing from particle $i$ to particle $j$.
End of explanation
"""
def e_total(moments, positions, anisotropy_axes, Ms, V, K, particle_id):
    return (
        e_dipole(moments, positions, Ms, V, particle_id) +
        e_anisotropy(moments, anisotropy_axes, V, K, particle_id)
    )
"""
Explanation: Total energy
The total energy contribution from a single particle in the ensemble is:
$$E_i=E^a_i+E^d_i$$
End of explanation
"""
def sphere_point():
    theta = 2*np.pi*np.random.rand()
    phi = np.arccos(1-2*np.random.rand())
    return np.array([np.sin(phi)*np.cos(theta), np.sin(phi)*np.sin(theta), np.cos(phi)])

def MH(positions, ani_axis, spins, Neq, Nsamps, SampRate, Ms, V, K, T, seed=42):
    np.random.seed(seed)
    k_b = mp.core.get_KB()
    test = np.copy(spins)
    Ntot = Neq+Nsamps*SampRate
    Out = np.zeros([spins.shape[0], spins.shape[1], Nsamps])
    ns = 0
    for n in tqdm_notebook(range(Ntot)):
        # pick a random spin
        i = int(np.random.rand(1)*positions.shape[0])
        # pick a random dir
        test[i, :] = sphere_point()
        dE = e_total(test, positions, ani_axis, Ms,
V, K, i) - \ e_total(moments, positions, ani_axis, Ms, V, K, i) if(np.random.rand(1) < np.exp(-dE/(k_b*T))): spins[i, :] = test[i, :] else: test[i, :] = spins[i, :] if (n >= Neq and (n-Neq)%SampRate == 0): Out[:, :, ns] = np.copy(spins) ns += 1 return Out """ Explanation: The Monte-Carlo algorithm Initialise each spin in the system Randomly choose a particle in the system and change it's orientation Compute $\Delta E$ the change in total energy arising from changing the particle orienation if $\Delta E<0$ then we accept the new state and store it $\Delta E>0$ we accept the new state and store it with probability $p=e^{\Delta E/(K_BT)}$ otherwise we reject the new state Return to 2 until desired number of samples Once we run this loop many times, we'll have a list of accepted samples of the system state. The distribution of this ensemble of states is guaranteed to converge to the true distribution. Monte-Carlo is much faster than numerical integration methods when we have many particles. End of explanation """ N = 2 # Two particles T = 330 # temperature K = 1e5 # anisotropy strength R = 9e-9 # distance between two particles r = 7e-9 # radius of the particles V = 4./3 * np.pi * r**3 # volume of particle Ms = 4e5 # saturation magnetisation # particle 1 particle 2 positions = np.array([[0., 0., 0.], [0., 0., R]]) moments = np.array([sphere_point(), sphere_point()]) anisotropy_axes = np.array([[0., 0., 1.], [0., 0., 1.]]) """ Explanation: Parameter set up Now we set the parameters for the two particle system. Both particles are identical and have their anisotropy axes aligned with the $z$ direction. End of explanation """ output = MH(positions, anisotropy_axes, moments, 100000, 600000, 20, Ms, V, K, T, 0) thetas = np.arccos(output[:, 2, :]) plt.hist(thetas[0], bins=50, normed=True) plt.title('Magnetisation angle histogram (MCMC)') plt.xlabel('Magnetisation angle $\\theta$ rads') plt.ylabel('Probability $p(\\theta)$'); """ Explanation: Run the MCMC sampler! This will take some time End of explanation """ # additionally we must specify damping alpha = 0.1 # We build a model of the two particles base_model = mp.Model( anisotropy=[K,K], anisotropy_axis=anisotropy_axes, damping=alpha, location=positions, magnetisation=Ms, magnetisation_direction=moments, radius=[r, r], temperature=T ) # Create an ensemble of 50,000 identical models ensemble = mp.EnsembleModel(50000, base_model) """ Explanation: Magpy - Dynamical Simulation We now use Magpy to simulate a large ensemble of the identical two-particle system. Once the ensemble has reached a stationary distribution, we determine the distribution of magnetisation angles over the ensemble. We expect this distribution to match the equilibrium distribution determined by the MCMC sampler. Define the Magpy model End of explanation """ res = ensemble.simulate(end_time=1e-9, time_step=1e-12, max_samples=500, random_state=1002, n_jobs=-1, implicit_solve=True, interactions=True) """ Explanation: Simulate the ensemble! Now we run the dynamical simulation using an implicit solver. Each model is simulated for 1ns. End of explanation """ m_z0 = np.array([state['z'][0] for state in res.final_state()])/Ms m_z1 = np.array([state['z'][1] for state in res.final_state()])/Ms theta0 = np.arccos(m_z0) theta1 = np.arccos(m_z1) """ Explanation: Compute the final state We use the Results.final_state() function to determine the state of each member of the ensemble after 1ns of simulation. 
The magnetisation angle is computed as the inverse cosine (arccos) of the $z$ component of the magnetisation, normalised by $M_s$.
End of explanation
"""
plt.hist(theta0, bins=50, alpha=0.5, normed=True, label='magpy')
plt.hist(thetas[0], bins=50, alpha=0.5, normed=True, label='MCMC')
plt.legend();
plt.xlabel('Magnetisation angle $\\theta$ (rads)')
plt.ylabel('Probability $p(\\theta)$');
"""
Explanation: Compare results
Single variable comparison
Below we compare the magnetisation angle distribution for a single particle as simulated with Magpy and the MCMC algorithm.
End of explanation
"""
fg, axs = plt.subplots(ncols=2, figsize=(11,4), sharey=True)
histdat = axs[0].hist2d(theta0, theta1, bins=16, normed=True)
axs[1].hist2d(thetas[0], thetas[1], bins=histdat[1], normed=True);
for ax, title in zip(axs, ['Magpy', 'MCMC']):
    ax.set_xlabel('Magnetisation angle $\\theta_0$')
    ax.set_ylabel('Magnetisation angle $\\theta_1$')
    ax.set_title(title)
fg.colorbar(histdat[3], ax=axs.tolist());
"""
Explanation: The results look to be a good match!
Joint distribution comparison
Below we compare the joint distribution of $\theta_0$ and $\theta_1$ (the magnetisation angle of both particles). In other words, this is the probability distribution over the entire state space. It is important to compare the joint distributions because the two particles interact with one another, creating a dependence between the two magnetisation angles.
End of explanation
"""
from scipy.stats import gaussian_kde
kde = gaussian_kde(thetas)
tgrid_x = np.linspace(theta0.min(), theta0.max(), 16)
tgrid_y = np.linspace(theta1.min(), theta1.max(), 16)
tgrid_x, tgrid_y = np.meshgrid(tgrid_x, tgrid_y)
Z = np.reshape(kde(np.vstack([tgrid_x.ravel(), tgrid_y.ravel()])).T, tgrid_x.shape)

fg, ax = plt.subplots(figsize=(9,5))
hist = ax.hist2d(theta0, theta1, bins=16, normed=True)
contour = ax.contour(tgrid_x, tgrid_y, Z, cmap='hot_r')
fg.colorbar(contour, label='MCMC')
fg.colorbar(hist[3], label='Magpy')
ax.set_xlabel('Magnetisation angle $\\theta_0$')
ax.set_ylabel('Magnetisation angle $\\theta_1$');
"""
Explanation: Alternatively compare using a kernel density function
An alternative method to visually compare the two distributions is to construct a kernel density estimate from one set of results and overlay it on a histogram of the other.
End of explanation
"""
res_noi = ensemble.simulate(end_time=1e-9, time_step=1e-12,
                            max_samples=500, random_state=1002,
                            n_jobs=-1, implicit_solve=True,
                            interactions=False)

m_z0 = np.array([state['z'][0] for state in res_noi.final_state()])/Ms
m_z1 = np.array([state['z'][1] for state in res_noi.final_state()])/Ms
theta0_noi = np.arccos(m_z0)
theta1_noi = np.arccos(m_z1)

plt.hist(theta0, bins=50, normed=True, alpha=0.4, label='Magpy')
plt.hist(theta0_noi, bins=50, normed=True, alpha=0.4, label='Magpy (no inter.)');
plt.hist(thetas[0], bins=50, histtype='step', lw=2, normed=True, alpha=0.4, label='MCMC')
plt.legend();
plt.xlabel('Magnetisation angle $\\theta_0$ rads')
plt.ylabel('Probability $p(\\theta_0)$');
plt.title('Comparison of $\\theta_0$ distribution');

fg, ax = plt.subplots(figsize=(9,5))
hist = ax.hist2d(theta0_noi, theta1_noi, bins=16, normed=True)
contour = ax.contour(tgrid_x, tgrid_y, Z, cmap='hot_r')
fg.colorbar(contour, label='MCMC')
fg.colorbar(hist[3], label='Magpy')
ax.set_xlabel('Magnetisation angle $\\theta_0$')
ax.set_ylabel('Magnetisation angle $\\theta_1$');
"""
Explanation: Sanity check: no interactions
To ensure that the interactions are having a significant effect on the joint distribution, we simulate the same system with the interactions disabled (simply set interactions=False).
End of explanation
"""
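# An optional quantitative check (an added sketch, not part of the original notebook):
# a two-sample Kolmogorov-Smirnov test comparing the Magpy and MCMC samples of the
# first particle's magnetisation angle. A large p-value is consistent with both samples
# coming from the same equilibrium distribution.
from scipy.stats import ks_2samp

ks_stat, p_value = ks_2samp(theta0, thetas[0])
print('KS statistic: {:.4f}, p-value: {:.4f}'.format(ks_stat, p_value))
"""
Explanation: The cell above is an added statistical complement to the visual histogram and KDE comparisons; it assumes theta0 (Magpy, with interactions) and thetas[0] (MCMC) are the samples computed earlier in the notebook.
End of explanation
"""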
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive/05_review/6_deploy.ipynb
apache-2.0
PROJECT = "cloud-training-demos" # Replace with your PROJECT BUCKET = "cloud-training-bucket" # Replace with your BUCKET REGION = "us-central1" # Choose an available region for Cloud MLE TFVERSION = "1.14" # TF version for CMLE to use import os os.environ["BUCKET"] = BUCKET os.environ["PROJECT"] = PROJECT os.environ["REGION"] = REGION os.environ["TFVERSION"] = TFVERSION %%bash if ! gsutil ls -r gs://${BUCKET} | grep -q gs://${BUCKET}/babyweight/trained_model/; then gsutil mb -l ${REGION} gs://${BUCKET} # copy canonical model if you didn't do previous notebook gsutil -m cp -R gs://cloud-training-demos/babyweight/trained_model gs://${BUCKET}/babyweight/trained_model fi """ Explanation: Deploying and Making Predictions with a Trained Model Learning Objectives - Deploy a model on Google CMLE - Make online and batch predictions with a deployed model Introduction In this notebook, we will deploy the model we trained to predict birthweight and we will use that deployed model to make predictions using our cloud-hosted machine learning model. Cloud ML Engine provides two ways to get predictions from trained models; i.e., online prediction and batch prediction; and we do both in this notebook. Have a look at this blog post on Online vs Batch Prediction to see the trade-offs of both approaches. As usual we start by setting our environment variables to reference our Project and Bucket. End of explanation """ %%bash MODEL_NAME="babyweight" MODEL_VERSION="ml_on_gcp" # Check to see if the model and version already exist, # if so, delete them to deploy anew if gcloud ai-platform models list | grep "$MODEL_NAME \+ $MODEL_VERSION"; then echo "Deleting the version '$MODEL_VERSION' of model '$MODEL_NAME'" yes | gcloud ai-platform versions delete ${MODEL_VERSION} --model=$MODEL_NAME echo "Deleting the model '$MODEL_NAME'" yes | gcloud ai-platform models delete ${MODEL_NAME} else echo "The model '$MODEL_NAME' with version '$MODEL_VERSION' does not exist." fi """ Explanation: Deploy trained model Next we'll deploy the trained model to act as a REST web service using a simple gcloud call. To start, we'll check if our model and version already exists and if so, we'll delete them. End of explanation """ %%bash MODEL_NAME="babyweight" MODEL_VERSION="ml_on_gcp" MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/babyweight/trained_model/export/exporter/ | tail -1) echo "Deploying the model '$MODEL_NAME', version '$MODEL_VERSION' from $MODEL_LOCATION" echo "... this will take a few minutes" gcloud ai-platform models create ${MODEL_NAME} --regions $REGION gcloud ai-platform versions create ${MODEL_VERSION} \ --model ${MODEL_NAME} \ --origin ${MODEL_LOCATION} \ --runtime-version $TFVERSION """ Explanation: We'll now deploy our model. This will take a few minutes. Once the cell below completes, you should be able to see your newly deployed model in the 'Models' portion of the ML Engine section of the GCP console. 
End of explanation """ from oauth2client.client import GoogleCredentials import requests import json MODEL_NAME = "babyweight" MODEL_VERSION = "ml_on_gcp" token = GoogleCredentials.get_application_default().get_access_token().access_token api = "https://ml.googleapis.com/v1/projects/{}/models/{}/versions/{}:predict" \ .format(PROJECT, MODEL_NAME, MODEL_VERSION) headers = {"Authorization": "Bearer " + token } data = { "instances": [ { "is_male": "True", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39 }, { "is_male": "False", "mother_age": 29.0, "plurality": "Single(1)", "gestation_weeks": 38 }, { "is_male": "True", "mother_age": 26.0, "plurality": "Triplets(3)", "gestation_weeks": 39 }, { "is_male": "Unknown", "mother_age": 29.0, "plurality": "Multiple(2+)", "gestation_weeks": 38 }, ] } response = requests.post(api, json=data, headers=headers) print(response.content) """ Explanation: Use the deployed model to make online predictions To make online predictions, we'll send a JSON request to the endpoint of the service to make it predict a baby's weight. The order of the responses are the order of the instances. End of explanation """ %%writefile inputs.json {"is_male": "True", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39} {"is_male": "False", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39} """ Explanation: When I ran the cell above, the predictions that I received for the four instances were 7.64, 7.17, 6.24 and 6.13 pounds, respectively. Your results might be different. Use model for batch prediction Batch prediction is commonly used when you want to make thousands to millions of predictions at a time. To perform batch prediction we'll create a file with one instance per line and submit the entire prediction job through a gcloud command. To illustrate this, let's create a file inputs.json which has two instances on which we want to predict. End of explanation """ %%bash INPUT=gs://${BUCKET}/babyweight/batchpred/inputs.json OUTPUT=gs://${BUCKET}/babyweight/batchpred/outputs gsutil cp inputs.json $INPUT gsutil -m rm -rf $OUTPUT gcloud ai-platform jobs submit prediction babypred_$(date -u +%y%m%d_%H%M%S) \ --data-format=TEXT \ --region ${REGION} \ --input-paths=$INPUT \ --output-path=$OUTPUT \ --model=babyweight \ --version=ml_on_gcp """ Explanation: When making batch predictions, we specify the Google Cloud Storage location of the input json file as well as the locatin to deposit the predictions. The cell below submits a batch prediction job to the cloud. We can monitor the status from the 'Jobs' portion of the ML Engine section of the GCP console. Once the jobs shows that it's completed there, we can examine the predictions uploaded to the OUTPUT location we specify below. End of explanation """ !gsutil ls gs://$BUCKET/babyweight/batchpred/outputs !gsutil cat gs://$BUCKET/babyweight/batchpred/outputs/prediction.results* """ Explanation: Check the AI Platform jobs submitted to the GCP console to make sure the prediction job has completed, then let's have a look at the results of our predictions. End of explanation """
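!gcloud ai-platform predict --model=babyweight --version=ml_on_gcp --json-instances=inputs.json
"""
Explanation: The cell above is only a sketch of an alternative client for online prediction and is not part of the original lab: the gcloud CLI can request predictions from the deployed version directly, reusing the inputs.json file written for the batch job. It assumes the babyweight model and ml_on_gcp version created earlier are still deployed and that inputs.json is present in the working directory.
End of explanation
"""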
empet/PSCourse
MarkovChains.ipynb
bsd-3-clause
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import networkx as nx
"""
Explanation: Irreducible and aperiodic Markov chains
End of explanation
"""
def GraphTr(Q):
    G=nx.from_numpy_matrix(Q, create_using=nx.DiGraph())
    nx.draw(G, node_color='b', alpha=0.3)
"""
Explanation: We define a function that generates the transition graph of a Markov chain with $m$ states, encoded $0,1, \ldots, m-1$, having the transition matrix $Q$:
End of explanation
"""
def EquilD(Q):
    Lam, V=np.linalg.eig(Q.T)
    absLam=np.abs(Lam)
    j=np.argmax(absLam)#the index of the eigenvalue with maximum absolute value
    v=V[:,j] # extract the eigenvector corresponding to the eigenvalue 1;
    # compute the equilibrium distribution by dividing each coordinate of v by
    #the sum of the coordinates:
    return v/np.sum(v)
"""
Explanation: The function EquilD computes the equilibrium distribution of an irreducible and aperiodic chain as the eigenvector corresponding to the eigenvalue 1 of the transposed transition matrix:
End of explanation
"""
#Example of defining the transition graph of a Markov chain
#Construction of the equilibrium distribution as the eigenvector corresponding to the value 1
# Generation of the sequence pi_n, with n sufficiently large
pia=np.array([0.3, 0.45, 0.25]) #the initial probability distribution
Q=[[0, 3./4, 1./4], [1./3, 0, 2./3], [0, 1.0, 0]]
Q=np.array(Q)# we stored the elements of Q
#in a list of lists and then converted it to an array
#The equilibrium distribution computed as an eigenvector of the transposed matrix Q:
GraphTr(Q)
pie=EquilD(Q)
#The equilibrium distribution as an approximation of the limit of the sequence pi_n
n=200
for i in range (n):
    pia=np.dot(pia,Q)
print 'The equilibrium distribution computed as an eigenvector is:\n', pie
print 'The distribution approximated by the sequence pi_n is:\n', pia.round(4)
print 'the norm of the difference between the two:\n', np.linalg.norm(pie-pia)
"""
Explanation: Example 1
End of explanation
"""
Q=[[0.4, 0, 0.3, 0.3], [0, 0.5, 0.3, 0.2], [0, 0.3, 0.3, 0.4], [0.2, 0.2, 0.45, 0.15]]
Q=np.array(Q)
print Q
GraphTr(Q)
"""
Explanation: Example 2
A Markov chain with 4 states has the transition matrix $Q$.
End of explanation
"""
pii=np.array([0.25, 0.25, 0.25, 0.25])#the initial probability distribution
"""
Explanation: We point out that although the transition matrix has nonzero entries on the main diagonal, networkx does not display the self-loops $(0,0)$, $(1,1)$, $(2,2)$, $(3,3)$ (graphical display of self-loops is not implemented).
End of explanation
"""
for i in range(10):
    print pii.round(4)
    pii=np.dot(pii, Q)
"""
Explanation: We compute and display the probability distributions $\pi_n$ for $n=\overline{0,9}$. Then we compute, without displaying them, $\pi_n$ for $n=\overline{10,49}$. Starting from $\pi_{50}$ we display them again:
End of explanation
"""
for i in range(10,50):
    pii=np.dot(pii, Q)
for i in range(50, 60):
    print pii.round(4)
    pii=np.dot(pii, Q)
"""
Explanation: We notice that as $n$ increases, the visit probabilities of the nodes 0,1,2,3 change. Initially, each node had the same probability of being visited. After 9 steps the probability that node 0 is visited has dropped from its initial value of 0.25 to 0.0879, while for nodes 1, 2, 3 it has increased.
End of explanation
"""
print np.sum(pii)
"""
Explanation: We see that the probability distribution of the node visits has stabilized, so the experimental equilibrium distribution, computed with 4 decimals, is $\pi=[0.0878, 0.3091, 0.3395, 0.2635]$. The sum of its coordinates is:
End of explanation
"""
pit= EquilD(Q)
print pit
print np.sum(pit)
"""
Explanation: and therefore $\pi$ is a probability vector.
$\pi(k)$, $k=0,1,2,3$, is the asymptotic visit probability of node $k$. Let us now compute the equilibrium distribution as the eigenvector corresponding to the eigenvalue 1 of the matrix $Q^T$:
End of explanation
"""
print np.linalg.norm(pii-pit)
"""
Explanation: The norm of the difference between the two equilibrium distributions (pii, obtained from the random walk on the graph according to the transition matrix $Q$, and the theoretical one, obtained as an eigenvector) is:
End of explanation
"""
pii=np.array([0.2, 0.3, 0.4, 0.1])
for i in range(10):
    print pii.round(4)
    pii=np.dot(pii, Q)
for i in range(10,50):
    pii=np.dot(pii, Q)
for i in range(50, 60):
    print pii.round(4)
    pii=np.dot(pii, Q)
"""
Explanation: that is, very small. Taking another initial probability distribution and displaying the associated sequences as in the previous case, we get:
End of explanation
"""
Q=[[0, 0, 1, 0, 0, 0, 0, 0],
   [0, 0, 0, 1, 0, 0, 0, 0],
   [0, 1, 0, 0, 0, 0, 0, 0],
   [1, 0, 0, 0, 0, 0, 0, 0],
   [0.6, 0, 0, 0, 0, 0.4, 0, 0],
   [0.2, 0, 0, 0, 0, 0, 0.8, 0],
   [0.25, 0, 0, 0, 0.2, 0.4, 0, 0.15],
   [0, 0, 0, 0.65, 0, 0, 0.35, 0]]
Q=np.array(Q)
print Q
"""
Explanation: We notice that in this case too, for $n\geq 50$, $\pi_n$ has stabilized at $\pi_n=[ 0.0878, 0.3091, 0.3395, 0.2635]^T$, that is, the same as in the case where the uniform distribution was the initial probability distribution.
Example 3
This example illustrates that if a Markov chain contains periodic trajectories, then although the matrix $Q^T$ admits the eigenvalue 1, the corresponding eigenvector is not guaranteed to have all coordinates nonzero and of the same sign. Moreover, in this case, fixing an initial probability distribution, the corresponding sequence $\pi_n$, with $\pi_n^T=\pi_{n-1}^TQ$, is not convergent.
End of explanation
"""
GraphTr(Q)
"""
Explanation: The associated graph is:
End of explanation
"""
Lam, V=np.linalg.eig(Q.T)
print Lam.round(2)
"""
Explanation: We notice from the graph that there is a periodic path of period 4, namely $0\to2\to1\to3\to 0$. Let $\pi=[0,0,0,0,0.25, 0.25, 0.25,0.25]$ be the initial probability distribution. The probability that the random walk on the graph starts from nodes 0,1,2,3 is 0, and it may start, uniformly, from nodes 4,5,6,7. The graph (the transition matrix) is not irreducible, because there is no path of edges between, for example, node $3$ and node $5$. The eigenvalues and eigenvectors of the matrix $Q^T$ are:
End of explanation
"""
absLam=np.abs(Lam)
print absLam.round(3)
"""
Explanation: More precisely, the eigenvalues are $-1, i, -i, 1, 0.68, -0.49, -0.19, 0$ and their absolute values are:
End of explanation
"""
v=V[:,3]
print v.round(3)
"""
Explanation: We notice that there is not a unique maximum of the absolute values, but four of them. The matrix $Q$, being row-stochastic, certainly has the eigenvalue 1, and its eigenvector is:
End of explanation
"""
pii=0.125*np.ones(8)
print pii
for i in range(35):
    print pii.round(3)
    pii=np.dot(pii, Q)
"""
Explanation: Contrary to the case when the matrix is irreducible and aperiodic, in this case the eigenvector corresponding to the eigenvalue 1 does not have all coordinates nonzero.
Let us now investigate the probability distributions at the times $n=1,2,\ldots$, when the initial distribution is the uniform one:
End of explanation
"""
pii=np.array([0,0,0,0,1,0,0,0])
for i in range(35):
    print pii.round(3)
    pii=np.dot(pii,Q)
"""
Explanation: We notice that instead of the sequence $\pi_n$ tending to a limit, it has 4 subsequences that tend to different limits (the last 4 distributions repeat after some rank $n$). We now take another initial probability distribution, $\pi=[0,0,0,0,1,0,0,0]^T$, and compute the associated sequence of distributions $\pi_n$:
End of explanation
"""
from IPython.display import Image
Image(filename='Imags/MarkovAbsC.png')
"""
Explanation: We notice again that the last 4 distributions repeat starting from a certain rank. Therefore there are 4 convergent subsequences of the sequence $(\pi_n)$, and the limits depend on the initial distribution. This example illustrates how important the hypotheses of an irreducible and aperiodic matrix are for guaranteeing the uniqueness of the equilibrium distribution and the nonzero, same-sign coordinates of the eigenvector of $Q^T$ corresponding to the value 1.
Absorbing Markov chains
We analyze two absorbing Markov chains that model two methods of forwarding information packets in wireless networks (see the description in the problem list on Markov chains).
Cooperative ARQ
End of explanation
"""
def MatrixC(n, p):
    Q=[]
    for i in range(n):
        Q.append([])
    for i in range(n):
        [Q[i].append(0) for j in range(n)]
    for i in range(n-2):
        Q[i][i:i+3]=[(1-p)**2, p*(1-p), p]
    Q[n-2][n-2:n]=[(1-p),p]
    Q[n-1][n-1]=1
    return np.array(Q)
"""
Explanation: The transition matrix for such a network with $n$ nodes, where the probability of successful forwarding from a node i to the node i+2, $i=\overline{0, n-3}$, equals p, is generated by the function:
End of explanation
"""
Q=MatrixC(5, 0.85)
print Q
"""
Explanation: For the case $n=5$, shown in the image above, and $p=0.85$, the transition matrix is:
End of explanation
"""
T=Q[:4,:4]
print T
"""
Explanation: Both from the graph and from the transition matrix we see that node $n=4$, the destination of the packet, is absorbing. This is a standard form of the transition matrix, somewhat opposite to the one discussed in the course. In this example the first nodes are transient and the last one is absorbing. In this arrangement the transient matrix $T$ consists of the entries at the intersections of rows and columns $0,1,2,3$:
End of explanation
"""
I=np.eye(4)
N=np.linalg.inv(I-T)
print N.round(4)
"""
Explanation: The fundamental matrix $N$ is:
End of explanation
"""
timp=np.sum(N[0])
print timp
"""
Explanation: The sum of the entries on row 0 of the matrix N is the average time spent by a packet in the network before reaching the destination (node 4).
End of explanation
"""
Image(filename='Imags/MarkovAbsNon.png')
"""
Explanation: Non-Cooperative ARQ
The forwarding protocol is illustrated in this case on a network of 5 nodes, indexed $0,1,2,3,4$:
End of explanation
"""
def MatrixNonC(n, p):
    # transition matrix for Non-Cooperative ARQ with n nodes and success probability p
    Q=[]
    for i in range(n):
        Q.append([])
    for i in range(n):
        [Q[i].append(0) for j in range(n)]
    for i in range(n-2):
        # stay in i with probability 1-p, jump directly to i+2 with probability p
        Q[i][i:i+3]=[(1-p), 0, p]
    Q[n-2][n-2:n]=[(1-p), p]
    Q[n-1][n-1]=1.0     # the destination is absorbing
    return np.array(Q)

print(MatrixNonC(5, 0.85))
"""
Explanation: The transition matrix is in this case:
End of explanation
"""
def simul(pr):
    # draw one value from the discrete distribution pr by the inverse CDF method
    k=0
    F=pr[0]
    u=np.random.random()
    while(u>F):
        k+=1
        F=F+pr[k]
    return k

import scipy.linalg as spl

n=100
p=0.85
Q=MatrixC(n, p)      # transition matrix for the first forwarding protocol (CARQ)
T=Q[:n-1,:n-1]
I=np.eye(n-1)
NC=spl.inv(I-T)
perfC=np.sum(NC[0])
S=MatrixNonC(n, p)   # transition matrix for the second forwarding protocol (NCARQ)
T=S[:n-1,:n-1]
NN=spl.inv(I-T)
perfNC=np.sum(NN[0])
print('mean time spent in the network by a packet forwarded under the CARQ protocol:', perfC)
print('mean time spent in the network by a packet forwarded under the NCARQ protocol:', perfNC)
"""
Explanation: Next we compare the theoretical and the experimental performance of the two packet-forwarding methods in a network of $n=100$ nodes, with the probability $p$ of forwarding from node $i$ to the node $i+2$, $i=\overline{0,n-3}$, closer to the destination, equal to $p=0.85$. A measure of the theoretical performance is the mean time spent in the network before reaching the destination, so the better of the two protocols is the one with the smaller sum of the elements on row $0$ of its fundamental matrix $N$. The experimental performance is evaluated by simulating a number of forwardings (nr) and, after each one, counting the nodes (not necessarily distinct) visited by the packet before it reaches the destination. After the nr simulated forwardings, the arithmetic mean of the number of visited nodes is computed. Each node is visited in one unit of time, so this mean is the mean time spent in the network. To compute the fundamental matrix of the chain, $N=(I-T)^{-1}$, we call the function inv from scipy.linalg, because it performs better than np.linalg.inv. To simulate the forwarding, that is, the Markov chain, we define the function:
End of explanation
"""
def forward(Q):
    # simulate the path of a packet until it reaches the absorbing node n-1
    n=Q.shape[0]
    nod=0
    pr=Q[0]          # the packet starts from node 0
    traj=[nod]
    while(nod!=n-1):
        nod=simul(pr)
        traj.append(nod)
        pr=Q[nod]
    return traj

traj1=forward(Q)
traj2=forward(S)
print('trajectory of the packet under the CARQ protocol\n', traj1)
print('number of nodes visited, one per unit of time:', len(traj1)-1)
print('trajectory of the packet under the NCARQ protocol\n', traj2)
print('number of nodes visited, one per unit of time:', len(traj2)-1)
"""
Explanation: So, from the theoretical point of view, the CARQ protocol performs better. Let us also verify this experimentally.
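Before running the simulations, here is a quick sanity check (a small optional cell that only uses the matrices Q and S built above) that every row of both transition matrices is a probability distribution:
End of explanation
"""
# every row of a transition matrix must sum to 1
print(np.allclose(Q.sum(axis=1), 1.0), np.allclose(S.sum(axis=1), 1.0))
"""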
Explanation: First we generate one trajectory of a packet under each protocol and display it:
End of explanation
"""
nr=50
lentr1=0
lentr2=0
for k in range(nr):
    traj1=forward(Q)
    traj2=forward(S)
    lentr1+=len(traj1)-1
    lentr2+=len(traj2)-1
print('The mean time spent in the network by a packet forwarded under CARQ is: ', float(lentr1)/nr)
print('The mean time spent in the network by a packet forwarded under NCARQ is: ', float(lentr2)/nr)
"""
Explanation: We forward 50 packets and compute the mean number of nodes visited under each protocol:
End of explanation
"""
from IPython.core.display import HTML
def css_styling():
    styles = open("./styles/custom.css", "r").read()
    return HTML(styles)
css_styling()
"""
Explanation: The theoretical means were 54.50 and 58.82, respectively. Therefore, both theoretically and experimentally, the CARQ protocol performs better.
End of explanation
"""