Negative Binomial Distribution
An experiment that consists of a series of independent, constant-probability Bernoulli trials, with the random variable defined as the number of trials, x, required to reach the kth success.
The general probability mass function for a negative binomial distribution is:

b*(x; k, p) = C(x - 1, k - 1) p^k (1 - p)^(x - k),   x = k, k + 1, k + 2, ...
The exponents for p and (1 - p) follow directly from the definition of the random variable; the additional factor of C(x - 1, k - 1) is needed because, other than the final (kth) success on the xth trial, the remaining (k - 1) successes can occur on any of the first (x - 1) trials, and there are C(x - 1, k - 1) ways for that to happen.
The derivation of the mean and variance of a negative binomial distribution is not in the text, but they are:

μ = k/p   and   σ² = k(1 - p)/p²
Early in the design phase of the development of a thin-film device, only 40% of the prototypes are functional. You need 5 functional devices for lifetime testing. You check device after device until you have the 5 you need. What is the probability you get all five in the first five you check? in the first six you check?
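As a check on the arithmetic, here is a short Python sketch of this example (my own illustration, not from the text), using the negative binomial pmf C(x - 1, k - 1) p^k (1 - p)^(x - k) with p = 0.4 and k = 5:

```python
from math import comb

def neg_binom_pmf(x, k, p):
    """P(kth success occurs on trial x) = C(x-1, k-1) p^k (1-p)^(x-k)."""
    return comb(x - 1, k - 1) * p ** k * (1 - p) ** (x - k)

p, k = 0.4, 5
p5 = neg_binom_pmf(5, k, p)    # all five functional devices in the first five checked
p6 = neg_binom_pmf(6, k, p)    # fifth functional device found on exactly the sixth check
print(round(p5, 5))            # 0.01024
print(round(p5 + p6, 5))       # 0.04096 -- five successes within the first six checks
```

Note that "all five in the first five" is just 0.4^5, since every check must succeed; "in the first six" adds the case where the fifth success lands on the sixth check.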
Geometric Distribution
An experiment that consists of a series of independent, constant-probability Bernoulli trials, with the random variable defined as the number of trials required to reach the first success.
This is a negative binomial where k = 1.
Suppose two dice are tossed, and a success is defined as rolling "snake eyes" - two 1s. How many times do we have to roll until we finally get "snake eyes"?
Graph this distribution
Suppose p = .4
The single parameter, p, defines the distribution. X, the random variable, tells us what the values on the abscissa will be.
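One way to "graph" the distribution for p = .4 is a quick text-based sketch in Python (my own illustration, not from the text), tabulating the geometric pmf p(1 - p)^(x - 1) over the first ten values of x:

```python
p = 0.4
for x in range(1, 11):
    prob = p * (1 - p) ** (x - 1)          # geometric pmf: P(first success on trial x)
    print(f"{x:2d}  {prob:.4f}  {'#' * round(prob * 50)}")
```

The bars decay geometrically: each value on the abscissa is (1 - p) times as likely as the one before it.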
The general probability mass function for a geometric distribution is:

g(x; p) = p(1 - p)^(x - 1),   x = 1, 2, 3, ...
The mean and variance of a geometric distribution are easy to calculate, but the derivation is not in your text:

μ = 1/p   and   σ² = (1 - p)/p²
The geometric distribution has the "lack of memory" property; that is, the probability of a success doesn't change from trial to trial (independent, equal-probability Bernoulli trials). Just because you've had a string of 0s doesn't mean you are "due" for a 1. This is the basis for a tricky, but oh-so-easy type of exam question: "In a certain manufacturing process it is known that, on the average, 1 in every 100 items is defective. Two hundred items have been found to be good. What is the probability that the 201st item will be defective?"
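The exam question can be checked exactly: for a geometric random variable, the probability that the next item is defective is unchanged by any string of good items. A small Python sketch with exact fractions (illustration only):

```python
from fractions import Fraction

p = Fraction(1, 100)    # long-run defect rate
n = 200                 # items already observed to be good

# P(X > n) = (1 - p)^n for a geometric random variable
p_survive = (1 - p) ** n
# P(X = n + 1) = p (1 - p)^n
p_next_defective = p * (1 - p) ** n

# Conditional probability that item n + 1 is defective, given n good items:
conditional = p_next_defective / p_survive
print(conditional)      # exactly 1/100 -- the 200 good items don't matter
```

The (1 - p)^n factors cancel, which is exactly the "lack of memory" property: the answer is simply p = 1/100.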
Poisson Distribution
The Poisson process is an underlying process, continuous in time (or length, area, etc.). But when we apply the sampling scheme where the outcome is counts, the resulting Poisson distribution is a discrete distribution.
Experiments of this type consist of counting occurrences: cars passing a point, flaws along a length of wire, people joining a line, raindrops in an interval.
In each of these experiments, a process happens at some rate (rate of traffic flow, rate of flaws, rate of people coming to the line, rate of rainfall), but each individual occurrence is random. If you can model your engineering process this way, you have a Poisson process, and its statistics are well known.
The probabilities of counts, for instance P(X = 0), P(X = 1), P(X = 2), ..., are related to the rate of the process.
We will not try to derive the probability mass function for the Poisson distribution (we could, except for the time), but will simply write it down:

p(x; λt) = e^(−λt) (λt)^x / x!,   x = 0, 1, 2, ...
The mean and variance of a Poisson distribution are both equal to the rate times the interval:

μ = λt   and   σ² = λt
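A quick numerical check (my own sketch, with a hypothetical value λt = 3.0) that the pmf's mean and variance both come out to λt:

```python
from math import exp, factorial

def poisson_pmf(x, lam):
    """P(X = x) = e^(-lam) lam^x / x!  (lam is the rate times the interval)."""
    return exp(-lam) * lam ** x / factorial(x)

lam = 3.0   # hypothetical lambda*t
mean = sum(x * poisson_pmf(x, lam) for x in range(100))
var = sum((x - mean) ** 2 * poisson_pmf(x, lam) for x in range(100))
print(round(mean, 6), round(var, 6))   # both come out equal to lam
```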
There is a cumulative probability table in your text, Table A2, as there was for the binomial.
Example: p162 5.18
A true Poisson process also has a "lack of memory" property. For example, if the number of oil tankers arriving at a port each day follows a Poisson distribution, then the fact that 20 tankers arrived today tells us nothing about tomorrow: the probability that some tankers will have to be turned away tomorrow is the same as on any other day.
The Poisson rate, λ, is scalable to the requirements of a problem.
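For example (hypothetical numbers, not from the text): if flaws occur at a rate of 2 per meter, then over half a meter the scaled rate is λt = 2 × 0.5 = 1, and P(X = 0) = e^(−1):

```python
from math import exp

rate_per_meter = 2.0                  # hypothetical flaw rate
length_m = 0.5                        # interval of interest
lam = rate_per_meter * length_m       # lambda scales with the interval: here 1.0
p_zero = exp(-lam)                    # P(no flaws in half a meter)
print(round(p_zero, 4))               # 0.3679
```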
Remember: the expected value and the mean of the "population" are the same thing.
Remember: often it's easier to calculate the opposite of what you want, then subtract from 1.
F(x) for the Poisson doesn't have a nice form. This makes the tables even more valuable.
Poisson approximation to the binomial
The Poisson can be viewed as a limiting form of the binomial. The binomial distribution is based on discrete, independent Bernoulli trials; the Poisson distribution is based on the continuous Poisson process (with its own "independence" across disjoint intervals).
This is convenient: use it when n is large, p is small, and the binomial is tough to calculate.
Method: the mean of the binomial is np; the mean of the Poisson is λ. Set λ = np (checking that the correct rate unit is being used) and solve the problem.
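A sketch of the method with hypothetical values n = 1000 and p = 0.002 (so λ = np = 2), comparing the two pmfs side by side:

```python
from math import comb, exp, factorial

n, p = 1000, 0.002       # hypothetical: n large, p small
lam = n * p              # set lambda = np (here lam = 2.0)

for x in range(4):
    binom = comb(n, x) * p ** x * (1 - p) ** (n - x)   # exact binomial pmf
    pois = exp(-lam) * lam ** x / factorial(x)         # Poisson approximation
    print(x, round(binom, 5), round(pois, 5))
```

For n this large and p this small, the two columns agree to about three decimal places, while the Poisson side avoids the enormous binomial coefficients.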
Example: p164 5.20
Underlying processes, parent distributions, sampling schemes, and sampling distributions