11.4: The Normal Distribution (2024)


In this section, we are going to study what is arguably the most important continuous random variable: the normal random variable. As usual, our process is to first define the density of this random variable.

    Definition: The Normal Random Variable

Definition: Let \(\mu\) and \(\sigma\) be numbers satisfying \(- \infty < \mu < \infty\) and \(\sigma > 0\) (so \(\mu\) can be any real number and \(\sigma\) any positive real number). Suppose \(X\) is a random variable whose density function is:
    \[
    f(x) =
    \displaystyle \frac{1}{\sigma \sqrt{2\pi}}e^{ -\dfrac{1}{2} \bigg(\dfrac{x - \mu}{\sigma} \bigg)^2 } ~~ \text{if} ~ - \infty < x < \infty
    \]

    or equivalently
    \[
    f(x) = \frac{1}{\sigma \sqrt{2\pi}} \exp\bigg[ - \frac{(x-\mu)^2}{2\sigma^2} \bigg] ~~ \text{if} ~ - \infty < x < \infty
    \]
Then we say \(X\) is a Normal random variable with parameters \(\mu\) and \(\sigma^2\), and we write \(X \sim \mathcal{N}(\mu, \sigma^2)\).
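Complicated as it looks, the density is straightforward to evaluate numerically. Here is a minimal Python sketch transcribing the formula above (the function name is our own, not part of any standard library):

```python
import math

def normal_pdf(x, mu, sigma):
    # Transcription of the density: (1 / (sigma * sqrt(2*pi))) * exp(-((x - mu)/sigma)^2 / 2)
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# The density is largest at x = mu, where it equals 1 / (sigma * sqrt(2*pi)).
print(normal_pdf(0, 0, 1))  # about 0.3989
```

Note that the peak height depends only on \(\sigma\), which is one way to see that the second parameter controls how "thick/thin" the curve is.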

    We will see why the Normal distribution is important in the next section. For now, allow us to discuss the properties of this distribution. Although the density presented in our definition looks extremely complicated, the graph of the density is something you may have seen before:

In R, for example, this curve is produced by:

curve(dnorm(x, 0, 1), from=-4, to=4, xlab="", xaxt="n", ylab="", yaxt="n")

[Figure: bell-shaped normal density curve]

    Specifically, the graph of the density has what we refer to as a bell-shaped curve. As with all the other random variables, let us take a moment to consider how each parameter influences the density. Essentially, the first parameter will tell us the "center" of the distribution or the \(x\)-value for which the distribution is symmetric about. Meanwhile, the second parameter tells us how "thick/thin" the distribution is. In order to see this, allow us to graph the density of a few normal random variables. We will first change the value of \( \mu \) while keeping \( \sigma^2 \) the same.

Allow us to graph the \(\mathcal{N}(0, 1)\) random variable, which has density \[\frac{1}{\sqrt{1} \sqrt{2\pi}}e^{ -\dfrac{1}{2} \big(\frac{x - 0}{\sqrt{1}}\big)^2 } = \frac{1}{\sqrt{2\pi}}e^{ -\dfrac{x^2}{2}} \nonumber\]

    Graphing this function produces the following curve:
[Figure: density of \(\mathcal{N}(0, 1)\), centered at \(x = 0\)]

    Notice for the \(\mathcal{N}(0, 1)\) random variable, the center of the distribution is at \(x =0\). Let us now change the first parameter to see how this affects the graph. Sketching the density of the \(\mathcal{N}(3, 1)\) random variable which has density

\[\frac{1}{\sqrt{1} \sqrt{2\pi}}e^{ -\dfrac{1}{2} \big(\frac{x - 3}{\sqrt{1}}\big)^2 } = \frac{1}{\sqrt{2\pi}}e^{ -\dfrac{(x-3)^2}{2}} \nonumber\] yields:

[Figure: density of \(\mathcal{N}(3, 1)\), centered at \(x = 3\)]

Notice for the \(\mathcal{N}(3, 1)\) random variable, the center of the distribution is at \(x = 3\). And finally, sketching the density of the \(\mathcal{N}(-3, 1)\) random variable, which has density \[\frac{1}{\sqrt{1} \sqrt{2\pi}}e^{ -\dfrac{1}{2} \big(\frac{x + 3}{\sqrt{1}}\big)^2 } = \frac{1}{\sqrt{2\pi}}e^{ -\dfrac{(x+3)^2}{2}} \nonumber\] yields:

[Figure: density of \(\mathcal{N}(-3, 1)\), centered at \(x = -3\)]

    Notice for the \(\mathcal{N}(-3, 1)\) random variable, the center of the distribution is at \(x = -3\). Placing all three densities side by side yields the following:

[Figure: densities of \(\mathcal{N}(0, 1)\), \(\mathcal{N}(3, 1)\), and \(\mathcal{N}(-3, 1)\) side by side]

    Intuitively, we see the first parameter controls the center of the distribution.

    We now perform a similar process to the second parameter. Let us again start with the density of the \(\mathcal{N}(0, 1)\) random variable:

[Figure: density of \(\mathcal{N}(0, 1)\)]

    Graphing the density of the \(\mathcal{N}(0, 4)\) random variable yields:

[Figure: density of \(\mathcal{N}(0, 4)\), wider than the standard normal]

Intuitively, the second parameter represents the thickness, and since the second parameter increased, the distribution is now thicker. Finally, graphing the density of the \(\mathcal{N}(0, 0.25)\) random variable yields:

[Figure: density of \(\mathcal{N}(0, 0.25)\), narrower than the standard normal]

    Intuitively, the second parameter represents the thickness and since the second parameter decreased, the distribution is now thinner.

    To get a side by side comparison, let us draw the densities of the \(\mathcal{N}(0, 1)\), \(\mathcal{N}(0, 4)\) and \(\mathcal{N}(0, 0.25)\) all on one graph:

[Figure: densities of \(\mathcal{N}(0, 1)\), \(\mathcal{N}(0, 4)\), and \(\mathcal{N}(0, 0.25)\) on one graph]

    Now that we have a feel for how the two parameters affect the graph of the density, we present the following theorem.

    Theorem \(\PageIndex{1}\)

    Theorem: If \(X \sim \mathcal{N}(\mu, \sigma^2)\) then
    \begin{align*}
    \mathbb{E}[X] = \mu ~~\text{and} ~~ \mathbb{V}ar[X] = \sigma^2
    \end{align*}
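The theorem can be checked empirically by simulation. A quick Python sketch (the parameter values below are our own choice, not from the text):

```python
import random
import statistics

random.seed(0)
mu, sigma = 3.0, 2.0  # our own choice: X ~ N(3, 4)

# Draw many samples and compare the sample mean and variance
# to the theorem's claims E[X] = mu and Var[X] = sigma^2.
sample = [random.gauss(mu, sigma) for _ in range(100_000)]

print(statistics.mean(sample))      # close to mu = 3
print(statistics.variance(sample))  # close to sigma^2 = 4
```

Note that the second parameter of \(\mathcal{N}(\mu, \sigma^2)\) is the variance \(\sigma^2\), while `random.gauss` (like the calculator commands later in this section) takes the standard deviation \(\sigma\).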

We have considered five different normal random variables in the above discussion. However, there are infinitely many normal random variables. Among them, there is one that we designate to be the standard, and so we will single out this particular normal random variable and attach a special name to it. In fact, to stress its distinction, we will assign this specific normal random variable its own letter, and we will give its pdf and cdf special letters too. Recall that most times we denote a random variable by \(X\), the density by \(f\), and the cdf by \(F\). But since this random variable is so important, we will denote it by \(Z\), its density by the Greek letter \(\phi\) (lowercase phi), and its cdf by the Greek letter \(\Phi\) (uppercase phi).

    Definition: The Standard Normal Random Variable

Definition: We call the \(\mathcal{N}(0,1)\) random variable the standard normal random variable and write \(Z \sim \mathcal{N}(0,1)\). Its density is given by
\[
\phi(z) = \frac{1}{\sqrt{2\pi}} e^{-\dfrac{z^2}{2}} ~~ \text{if} ~ - \infty < z < \infty
\]
    and its cdf is given by
    \[
    \Phi(z) = \int_{- \infty}^{z} \dfrac{1}{\sqrt{2\pi}} e^{- \dfrac{t^2}{2}} ~ dt
    \]

[Figure: the standard normal distribution]
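Although \(\Phi\) has no elementary antiderivative (as we are about to see), it can be written exactly in terms of the error function: \(\Phi(z) = \tfrac{1}{2}\big(1 + \operatorname{erf}(z/\sqrt{2})\big)\). Most programming languages expose \(\operatorname{erf}\); a short Python sketch:

```python
import math

def Phi(z):
    # Standard normal cdf via the error function:
    # Phi(z) = (1 + erf(z / sqrt(2))) / 2
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(Phi(0))     # exactly 0.5, by symmetry of the density about z = 0
print(Phi(1.96))  # about 0.975, a value familiar from confidence intervals
```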

    With this in mind, allow us to consider the following example that I would like to attempt to answer in three different ways.

    Example (Method 1) \(\PageIndex{1}\)

    Suppose \(Z \sim \mathcal{N}(0,1)\).
    Find \(P(Z \leq 1)\).
    Find \(P(Z \geq 1.5)\).
    Find \(P(-1.36 \leq Z \leq 1.72)\).

    First Attempt at a Solution:

Allow us to try to use the definition of a continuous random variable to answer this problem. By the definition of a continuous random variable, we see that \begin{align*}
P(Z \leq 1) = \Phi(1) = \int_{- \infty}^{1} \frac{1}{\sqrt{2\pi}} e^{- \frac{t^2}{2}} ~ dt = \lim_{a \rightarrow - \infty} \bigg[ \int_{a}^{1} \frac{1}{\sqrt{2\pi}} e^{- \frac{t^2}{2}} ~ dt \bigg]
\end{align*}

Can we find an antiderivative of \( \frac{1}{\sqrt{2\pi}} e^{- \frac{t^2}{2}} \)? Unfortunately, the answer is no. Although the antiderivative exists, it is not an elementary function, which means that no matter how hard we try, or what integration technique we use, we will never be able to express the antiderivative of \( \frac{1}{\sqrt{2\pi}} e^{- \frac{t^2}{2}} \) in terms of functions we know. (We can, however, express \(\int \frac{1}{\sqrt{2\pi}} e^{- \frac{t^2}{2}} ~ dt \) as an infinite series.)

\begin{align*}
P(Z \leq 1) = \Phi(1) = \int_{- \infty}^{1} \frac{1}{\sqrt{2\pi}} e^{- \frac{t^2}{2}} ~ dt = \lim_{a \rightarrow - \infty} \bigg[ \int_{a}^{1} \frac{1}{\sqrt{2\pi}} e^{- \frac{t^2}{2}} ~ dt \bigg] = ~ ????
\end{align*}

    The point remains that we cannot perform the above computation by hand. A similar story holds for the following two parts of the above example.

    \begin{align*}
    P(Z \geq 1.5) = 1 - P(Z < 1.5) = 1 - P(Z \leq 1.5) = 1 - \Phi(1.5) = 1 - \int_{- \infty}^{1.5} \frac{1}{\sqrt{2\pi}} e^{- \frac{t^2}{2}} ~ dt = ~ ????
    \end{align*}

    \begin{align*}
    P(-1.36 \leq Z \leq 1.72) = \Phi(1.72) - \Phi(-1.36) = \int_{- 1.36}^{1.72} \frac{1}{\sqrt{2\pi}} e^{- \frac{t^2}{2}} ~ dt = ~ ????
    \end{align*}

    In summary, we are stuck since in each case, we run into an integral which we cannot evaluate and so we fail to obtain a numerical answer.

    We have failed our first attempt at the above problem and so the question now becomes, is there a way we can get around this?

Luckily, the answer is yes! Specifically, what we can do is the following: for ONLY the standard normal random variable, we can create a chart which gives the cdf, \(\Phi(z)\), for important/common values. Of course, this chart has to be a compromise. What do I mean by this? The support of \(Z\) contains infinitely many real numbers, so technically we would need an infinite chart, which is not exactly practical. Thus, our chart gives the cdf for only specific/important/common values of the standard normal random variable. Using this chart is Method 2:

[Standard normal cdf table]

    Chart provided by: https://www.math.arizona.edu/~jwatkins/normal-table.pdf

    Example (Method 2) \(\PageIndex{1}\)

    Suppose \(Z \sim \mathcal{N}(0,1)\).
    Find \(P(Z \leq 1)\).
    Find \(P(Z \geq 1.5)\).
    Find \(P(-1.36 \leq Z \leq 1.72)\).

    Second Attempt at a Solution:

    We now use the standard normal cdf chart presented above.

    \begin{align*}
    P(Z \leq 1) = \Phi(1) = \Phi(1.00) \approx 0.8413
    \end{align*}

    \begin{align*}
    P(Z \geq 1.5) = 1 - P(Z < 1.5) = 1 - P(Z \leq 1.5) = 1 - \Phi(1.5) = 1 - \Phi(1.50) \approx 1 - 0.9332 = 0.0668
    \end{align*}

    \begin{align*}
    P(-1.36 \leq Z \leq 1.72) = \Phi(1.72) - \Phi(-1.36) \approx 0.9573 - 0.0869 = 0.8704
    \end{align*}
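These table lookups can be reproduced numerically. Using the error-function identity \(\Phi(z) = \tfrac{1}{2}\big(1 + \operatorname{erf}(z/\sqrt{2})\big)\), a Python sketch recovers all three four-decimal answers:

```python
import math

def Phi(z):
    # Standard normal cdf via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(round(Phi(1.00), 4))               # 0.8413, matching the table
print(round(1 - Phi(1.50), 4))           # 0.0668
print(round(Phi(1.72) - Phi(-1.36), 4))  # 0.8704
```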

And so we were successful in this attempt. This table-based approach is how such probabilities were found in the 1700s/1800s. However, our third method uses a more modern approach: making use of the calculator commands!

Example (Method 3) \(\PageIndex{1}\)

    Suppose \(Z \sim \mathcal{N}(0,1)\).
    Find \(P(Z \leq 1)\).
    Find \(P(Z \geq 1.5)\).
    Find \(P(-1.36 \leq Z \leq 1.72)\).

    Third Attempt at a Solution:

    The TI-84 is equipped with a normalcdf command. To use it, we note the following:

    Note

If \(X \sim \mathcal{N}( \mu, \sigma^2) \) with density \(f\), then:

1) \( P( a \leq X \leq b) = \int_{a}^{b} f(x) ~ dx = \text{normalcdf}(a, b, \mu, \sigma) \)

2) \( P( X \leq a ) = \int_{- \infty}^{a} f(x) ~ dx = \text{normalcdf}(\text{-1E99}, a, \mu, \sigma) \)

where \( \text{-1E99} \) is how your calculator approximates \( - \infty \).

    Hence,

\begin{align*}
P(Z \leq 1) = \text{normalcdf}(\text{-1E99}, 1, 0, 1) \approx 0.8413447404
\end{align*}

\begin{align*}
P(Z \geq 1.5) = 1 - P(Z < 1.5) = 1 - P(Z \leq 1.5) = 1 - \text{normalcdf}(\text{-1E99}, 1.5, 0, 1) \approx 0.0668072287
\end{align*}

\begin{align*}
P(-1.36 \leq Z \leq 1.72) = \text{normalcdf}(-1.36, 1.72, 0, 1) \approx 0.8703678946
\end{align*}
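For readers without a TI-84, the calculator command is easy to imitate in software. A hedged Python sketch (the function name mirrors the calculator command, but the implementation, via the error function, is our own):

```python
import math

def normalcdf(a, b, mu, sigma):
    # Imitation of the TI-84 normalcdf command (our own implementation):
    # P(a <= X <= b) for X ~ N(mu, sigma^2), computed via the error function.
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return Phi((b - mu) / sigma) - Phi((a - mu) / sigma)

# Reproducing the calculator answers above; -1e99 stands in for -infinity.
print(round(normalcdf(-1e99, 1, 0, 1), 4))     # 0.8413
print(round(normalcdf(-1.36, 1.72, 0, 1), 4))  # 0.8704
```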

Thus, given the standard normal random variable, we can find any probability associated with it. But what if we do not have the standard normal random variable? For instance, suppose \(X \sim \mathcal{N}(1,4)\) and we wish to find \(F(3) = P(X \leq 3)\). How could we do this?

We could proceed as we did in Method 1. That is, in theory, we know the answer should be the integral of the density function over the appropriate region. Geometrically, the density looks like:

[Figure: density of \(\mathcal{N}(1, 4)\)]

    And so
    \begin{align*}
    P(X \leq 3) = \int_{-\infty}^{3} \frac{1}{2 \sqrt{2\pi}}e^{- \dfrac{1}{2} \bigg(\dfrac{t-1}{2} \bigg)^2} ~ dt = ~~ ????
    \end{align*}

    And again, this becomes a function which we cannot integrate by hand.

Alternatively, we could develop a chart for the cumulative distribution function of the \( \mathcal{N}(1,4)\) random variable, but does this sound practical? If every time we had a normal random variable we had to look up a different chart, how many charts would we need? Infinitely many!

    And so we need a more efficient method. The quick answer is to use the modern approach with our calculators:

    Example \(\PageIndex{1}\)

    If \(X \sim \mathcal{N}(1,4) \), then find \(P(X \leq 3)\).

    Answer

\begin{align*}
P(X \leq 3) = \text{normalcdf}(\text{-1E99}, 3, 1, 2) \approx 0.8413447404.
\end{align*}

    However, how did mathematicians and statisticians in the 1800s answer this question when they did not have a calculator readily available to them? The answer comes in the form of the following theorem:

    Theorem: The Standardization Process \(\PageIndex{1}\)

Theorem: If \(X \sim \mathcal{N}(\mu, \sigma^2)\) then
\begin{align*}
\frac{X- \mu}{\sigma} = Z \sim \mathcal{N}(0, 1)
\end{align*}
Furthermore, if \(F\) denotes the cdf of \(X\), then

\begin{align*}
F(x) = \Phi \bigg( \frac{x-\mu}{\sigma} \bigg)
\end{align*}
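The identity \(F(x) = \Phi\big(\frac{x-\mu}{\sigma}\big)\) can be checked by simulation. A Python sketch for the \(\mathcal{N}(1,4)\) example (the error-function form of \(\Phi\) is a standard identity, not from the text):

```python
import math
import random

random.seed(1)
mu, sigma = 1.0, 2.0  # X ~ N(1, 4), the example from the text
Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Empirical F(3): the fraction of simulated X-values at or below 3 ...
draws = [random.gauss(mu, sigma) for _ in range(100_000)]
empirical = sum(d <= 3.0 for d in draws) / len(draws)

# ... should agree with Phi((3 - mu)/sigma) = Phi(1), per the theorem.
print(empirical, Phi((3.0 - mu) / sigma))
```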

    Interpretation of the Standardization Process

    Let us suppose we start with the \(\mathcal{N}(1,4)\) random variable as pictured below:

[Figure: density of \(\mathcal{N}(1, 4)\)]

If we apply the linear transformation \(\frac{X - 1}{2}\) to \(X\), then the resulting random variable becomes

[Figure: density of \(\mathcal{N}(0, 1)\)]

    which is precisely \(Z\), the standard normal random variable!

    Moreover, this is true for any normal random variable. If we start with any normal random variable \(X\) with parameters \( \mu\) and \( \sigma^2\), then the random variable \(\frac{X - \mu}{\sigma}\) becomes \( Z \).

    In summary, given any normal random variable, if we subtract the first parameter and divide by the square root of the second parameter, then we will obtain the standard normal random variable, \(Z\).

    Using the above theorem, allow us to answer the question.

    Example \(\PageIndex{1}\)

    If \(X \sim \mathcal{N}(1,4) \), then find \(P(X \leq 3)\).

    Answer

    \begin{align*}
    P(X \leq 3) &= P \bigg(\frac{X-1}{2} \leq \frac{3-1}{2}
    \bigg) \\
    &= P(Z \leq 1) \\
    &= \Phi(1) ~~ \text{via the standard normal table} \\
    &\approx 0.8413
    \end{align*}

To summarize these techniques, allow us to take a look at one final example.

    Example \(\PageIndex{1}\)

    If \(X \sim \mathcal{N}(120, 2^2)\), then find \(P(116 \leq X \leq 118)\).

    Answer

    Before we answer this, let us understand the picture: The \(\mathcal{N}(120, 2^2)\) looks like the following:

[Figure: density of \(\mathcal{N}(120, 2^2)\)]

    Here are the four ways we can attempt to answer this problem, of which the last three attempts are preferred.

    1) Of course we could say
    \begin{align*}
    P(116 \leq X \leq 118) = \int_{116}^{118} \frac{1}{2 \sqrt{2\pi}} e^{ -\dfrac{1}{2} \bigg(\dfrac{t-120}{2} \bigg)^2} ~ dt
    \end{align*}

    but then we would not be able to integrate this.

    2) Alternatively we can perform the simple calculator command:
\begin{align*}
P(116 \leq X \leq 118) = F(118) - F(116) &= \text{normalcdf}(116, 118, 120, 2) \\
&\approx 0.1359051975
\end{align*}

3) Or we may use the standardization process with the calculator. \begin{align*}
P(116 \leq X \leq 118) &= P \bigg( \frac{116-120}{2} \leq \frac{X-120}{2} \leq \frac{118-120}{2} \bigg) \\
&= P(-2 \leq Z \leq -1) \\
&= \Phi(-1) - \Phi(-2) \\ &= \text{normalcdf}(-2, -1, 0, 1) \\
&\approx 0.1359051975
\end{align*}

4) Or we may use the standardization process with the standard normal table. \begin{align*}
    P(116 \leq X \leq 118) &= P \bigg( \frac{116-120}{2} \leq \frac{X-120}{2} \leq \frac{118-120}{2} \bigg) \\
    &= P(-2 \leq Z \leq -1) \\
    &= \Phi(-1) - \Phi(-2) \\
    &\approx 0.1587 - 0.0228 \\
    &= 0.1359
    \end{align*}
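The agreement between methods 2, 3, and 4 can be confirmed in a few lines of Python (using the error-function form of \(\Phi\), a standard identity):

```python
import math

Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu, sigma = 120.0, 2.0  # X ~ N(120, 2^2)

# Method 2 style: F(118) - F(116), standardizing each endpoint
direct = Phi((118 - mu) / sigma) - Phi((116 - mu) / sigma)

# Methods 3/4 style: P(-2 <= Z <= -1) after standardization
standardized = Phi(-1.0) - Phi(-2.0)

print(round(direct, 4), round(standardized, 4))  # both 0.1359
```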
