{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# SMIPP 21/22 - Exercise Sheet 9\n", "\n", "## Prof. Dr. K. Reygers, Dr. R. Stamen, Dr. M. Völkl\n", "\n", "## Hand in by: Thursday, January 13th: 12:00\n", "### Submit the file(s) through the Übungsgruppenverwaltung\n", "\n", "\n", "### Names (up to two):\n", "### Points: " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 9.1 Simple linear regression with scikit-learn (10 points)\n", "\n", "In this exercise we use Francis Galton's famous [data on family heights](https://www.randomservices.org/random/data/Galton.html) to get acquainted with [scikit-learn](https://scikit-learn.org).\n", "\n", "a) Define a [linear regression model](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html?highlight=linear%20regression#sklearn.linear_model.LinearRegression) and use it describe the son's height ($y$) as a function of the father's height ($x$). Fit the model to the data and print the coefficients.\n", "\n", "b) Make a scatter plot of the data and superimpose the fitted linear function. In addition, plot the line $y = x$. \n", "\n", "You will see that tall fathers tend to have sons that are smaller than them. Correspondingly, small fathers tend to have son's that are slighly taller than their father's. Galton called this \"regression to mediocrity\". Today, this is know as [\"regression towards the mean\"](https://en.wikipedia.org/wiki/Regression_toward_the_mean). This is where the term \"regression\" comes from.\n", "\n", "Add your code below." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import pandas as pd\n", "import matplotlib.pyplot as plt\n", "from sklearn import linear_model" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "filename = \"https://www.randomservices.org/random/data/Galton.txt\"\n", "df = pd.read_csv(filename, engine='python', sep='\\s+')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# x: father's height, y: son's height\n", "xa = df[df['Gender']=='M']['Father'].values\n", "x = np.reshape(xa, (-1, 1))\n", "y = df[df['Gender']=='M']['Height'].values" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### a)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# define linear model, fit the data and print the coefficients\n", "\n", "### Your code here ###\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# use the \"predict\" method of the model to get the model prediction\n", "hf_tmp = np.linspace(60, 80, 1000)\n", "hf = np.reshape(hf_tmp, (-1, 1))\n", "\n", "### Your code here ###\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### b)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Plot the data and the model\n", "\n", "### Your code here ###\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 9.2 Classification of air showers measured with the MAGIC telescope (15 points)\n", "\n", "The [MAGIC telescope](https://en.wikipedia.org/wiki/MAGIC_(telescope)) is a Cherenkov telescope situated on La Palma, one of the Canary Islands. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "## 9.2 Classification of air showers measured with the MAGIC telescope (15 points)\n", "\n", "The [MAGIC telescope](https://en.wikipedia.org/wiki/MAGIC_(telescope)) is a Cherenkov telescope situated on La Palma, one of the Canary Islands. The [MAGIC machine learning dataset](https://archive.ics.uci.edu/ml/datasets/magic+gamma+telescope) can be obtained from the [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/index.php).\n", "\n", "The task is to separate signal events (gamma showers) and background events (hadron showers) based on the features of a measured Cherenkov shower.\n", "\n", "The features of a shower are:\n", "\n", " 1. fLength: continuous # major axis of ellipse [mm]\n", " 2. fWidth: continuous # minor axis of ellipse [mm]\n", " 3. fSize: continuous # 10-log of sum of content of all pixels [in #phot]\n", " 4. fConc: continuous # ratio of sum of two highest pixels over fSize [ratio]\n", " 5. fConc1: continuous # ratio of highest pixel over fSize [ratio]\n", " 6. fAsym: continuous # distance from highest pixel to center, projected onto major axis [mm]\n", " 7. fM3Long: continuous # 3rd root of third moment along major axis [mm]\n", " 8. fM3Trans: continuous # 3rd root of third moment along minor axis [mm]\n", " 9. fAlpha: continuous # angle of major axis with vector to origin [deg]\n", " 10. fDist: continuous # distance from origin to center of ellipse [mm]\n", " 11. class: g,h # gamma (signal), hadron (background)\n", "\n", "g = gamma (signal): 12332\n", "h = hadron (background): 6688\n", "\n", "For technical reasons, the number of h events is underestimated.\n", "In the real data, the h class represents the majority of the events.\n", "\n", "You can find further information about the MAGIC telescope and the data discrimination studies in the following [paper](https://reader.elsevier.com/reader/sd/pii/S0168900203025051?token=8A02764E2448BDC5E4DD0ED53A301295162A6E9C8F223378E8CF80B187DBFD98BD3B642AB83886944002206EB1688FF4) (R. K. Bock et al., \"Methods for multidimensional event classification: a case study using images from a Cherenkov gamma-ray telescope\", NIM A 516 (2004) 511-528). (You need to be within the university network to get free access.)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "import numpy as np\n", "from sklearn.model_selection import train_test_split" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "filename = \"https://www.physi.uni-heidelberg.de/~reygers/lectures/2020/smipp/magic04_data.txt\"\n", "df = pd.read_csv(filename, engine='python')" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# use categories 1 and 0 instead of \"g\" and \"h\"\n", "df['class'] = df['class'].map({'g': 1, 'h': 0})" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.head()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### a) For each variable, create a figure with the distributions for gammas and hadrons overlaid." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df0 = df[df['class'] == 0] # hadron data set\n", "df1 = df[df['class'] == 1] # gamma data set\n", "\n", "print(len(df0), len(df1))\n", "\n", "### YOUR CODE ###\n", "\n" ] },
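{ "cell_type": "markdown", "metadata": {}, "source": [ "One possible way to draw the overlays (a sketch; it assumes the `df0`/`df1` split from the cell above, and the choice of 50 bins per variable is arbitrary):" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# sketch: one figure per variable, gamma and hadron distributions overlaid (normalized)\n", "for col in [c for c in df.columns if c != 'class']:\n", "    bins = np.linspace(df[col].min(), df[col].max(), 50)\n", "    plt.figure(figsize=(5, 3))\n", "    plt.hist(df1[col], bins=bins, density=True, histtype='step', label='gamma (signal)')\n", "    plt.hist(df0[col], bins=bins, density=True, histtype='step', label='hadron (background)')\n", "    plt.xlabel(col)\n", "    plt.legend()\n", "plt.show()" ] },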
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "y = df['class'].values\n", "X = df[[col for col in df.columns if col!=\"class\"]]\n", "\n", "### YOUR CODE ### \n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### c) Define the logistic regressor and fit the training data" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn import linear_model\n", "\n", "# define logistic regressor\n", "\n", "### YOUR CODE ###\n", "\n", "logreg=\n", "\n", "# fit training data\n", "\n", "### YOUR CODE ###\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### d) Determine the Model Accuracy, the AUC score and the Run time" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.metrics import roc_auc_score\n", "\n", "### YOUR CODE ###\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### e) Plot the ROC curve (Backgropund Rejection vs signal efficiency)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "from sklearn.metrics import roc_curve\n", "%matplotlib inline\n", "\n", "y_pred_prob = logreg.predict_proba(X_test) # predicted probabilities\n", "\n", "### YOUR CODE ###\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### f) Plot the Signal efficiency vs. the Background efficiency and compare it to the corresponding plot in the paper" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "### YOUR CODE ###\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 9.3 Linear discriminant analysis and Gaussian probability densities (10 points)\n", "\n", "Show that for Gaussian probability densities with the same covariance for signal and background, the optimal decision boundary is linear and equals the one given by the Fisher discriminant\n", "\n", "a) Consider a signal and a background distribution described by multi-variate Gaussians with the same covariance matrix but different means $\\vec u_s$ and $\\vec \\mu_b$. Write down the likelihood ratio (up to a proportionality constant) which, according to the Neyman-Pearson lemma, gives the best possible classifier $y(\\vec x)$.\n", "\n", "b) Write down the logarithm of the likelihood ratio and show that this is a linear function in $\\vec x$\n", "\n", "c) Show that the coefficients of the linear classifier are (up to an arbitrary factor and offset) the same as the ones obtained in the lecture for the Fisher discriminant." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Solution:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 9.4 Cheat Sheet (5 points)\n", "\n", "For the exam you will be allowed to bring one A4 sheet (2 sided). Please prepare a sheet which covers the material which has been discussed so far. On January 10 and 11th. we will write a test exam with a few problems during the tutorials. You can bring this sheet to the test exam.\n", "\n", "On the lecture website you will find a zip file which contains latex files (the main file: smipp_cheatsheet.tex and one file per chapter in the contents directory). You can use these files to prepare your cheat sheet, if you want. Handwritten sheets are fine as well of course." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.3" } }, "nbformat": 4, "nbformat_minor": 4 }