
Principles of probability and inference

[MG1:Chp 3]

 

  • A sample is a subset of individuals drawn from a population

Inferential statistics

[MG1:p20-21]

  • Data are collected and analysed from a sample to make inferences about the larger population.

Fallacy of affirming the consequent

  • In logic, it is preferable to refute a hypothesis rather than try to prove one.
  • It is deductively valid to reject a hypothesis when testable implications are found to be false
    * "Modus tollens"
  • It is NOT deductively valid to accept a hypothesis when testable implications are found to be true
    * "Fallacy of affirming the consequent"

In other words,

  • Statement: If P, then Q
  • Q is not true
    --> P can be rejected
  • Q is true
    --> P cannot be rejected, but neither is P proven (i.e. the validity of P is still unknown)

 

Example from wikipedia regarding fallacy of "affirming the consequent"

  • If Bill Gates owns Fort Knox, then he is rich
  • Bill Gates is rich
  • Therefore he owns Fort Knox

NB:

  • But if Bill Gates is not rich, it can be safely concluded that he does not own Fort Knox
    * i.e. rejecting the hypothesis of Bill Gates owning Fort Knox

Null hypothesis

[MG1:p21]

  • Null hypothesis (H0) --> Drug has no significant effect
  • Alternative hypothesis (H1) --> Drug causes some effect
  • To test H0, a sample is taken and the data analysed using an appropriate significance test
    --> A test statistic is derived (e.g. t score, z score)
    --> The test statistic is associated with a certain probability (the P value); see the sketch below
  • P value = The likelihood that the results obtained (or more extreme results) could have occurred by chance, assuming that H0 is true
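
A minimal sketch of deriving a test statistic and its P value (not from MG1; the data, group labels, and choice of a two-sample t test are invented for illustration):

```python
# Minimal sketch: test statistic and P value for H0 "drug has no effect".
# Data values are invented purely for illustration.
import numpy as np
from scipy import stats

placebo = np.array([142, 138, 150, 145, 139, 147, 141, 144])  # e.g. systolic BP (mmHg)
drug    = np.array([135, 130, 141, 128, 136, 133, 138, 131])

# Two-sample t test: returns the test statistic and its associated P value
t_stat, p_value = stats.ttest_ind(drug, placebo)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
# P = probability of a result at least this extreme, assuming H0 is true
```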

Type I error

  • If P is less than an arbitrarily chosen value (alpha, or the significance level)
    --> H0 is rejected
  • If H0 is rejected incorrectly (i.e. the drug is thought to have an effect when it really has no significant effect)
    --> Type I error
  • Alpha is often set at 0.05
    * 5% probability of making a type I error
  • Type I error is more important because, when it is made, patients may be taking drugs (with all the accompanying side effects) that actually have no benefit at all (see the simulation sketch below)
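
A minimal simulation sketch (invented data) showing that when H0 is really true and alpha = 0.05, roughly 5% of trials still return P < 0.05, i.e. a type I error:

```python
# Both "groups" are drawn from the same distribution, so H0 is true by construction;
# the proportion of P values below alpha approximates the type I error rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n_trials = 0.05, 10_000
false_positives = 0
for _ in range(n_trials):
    a = rng.normal(loc=140, scale=10, size=30)
    b = rng.normal(loc=140, scale=10, size=30)
    if stats.ttest_ind(a, b).pvalue < alpha:
        false_positives += 1
print(false_positives / n_trials)  # close to 0.05
```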

Type II error

  • If P is not less than alpha
    --> H0 is accepted
  • If H0 is accepted incorrectly (i.e. the drug is thought to have no significant effect when it really does)
    --> Type II error
  • The probability of type II error is termed beta

Summary

  • Type I error (alpha) = H0 rejected incorrectly
  • Type II error (beta) = H0 accepted incorrectly

One-tailed hypothesis vs two-tailed hypothesis

  • Unless it is certain BEFORE the trial that the intervention can only produce an effect in one particular direction
    --> A two-tailed hypothesis should be used (each tail contains 1/2 of alpha)
  • A one-tailed hypothesis should only be used if the direction of the effect was clear BEFORE the study commenced (see the sketch below)
    * e.g. Two-tailed hypothesis to see if drug A has an effect on BP
    * e.g. One-tailed hypothesis to see if drug A has a statistically significant BP-lowering effect
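
A minimal sketch of one- vs two-tailed testing (invented data; assumes a SciPy version recent enough to support the `alternative` argument of `ttest_ind`):

```python
import numpy as np
from scipy import stats

placebo = np.array([142, 138, 150, 145, 139, 147, 141, 144])
drug    = np.array([135, 130, 141, 128, 136, 133, 138, 131])

# Two-tailed: does drug A have *any* effect on BP (up or down)?
two_tailed = stats.ttest_ind(drug, placebo, alternative="two-sided")

# One-tailed: is drug A BP-*lowering*? (direction specified before the study)
one_tailed = stats.ttest_ind(drug, placebo, alternative="less")

print(two_tailed.pvalue, one_tailed.pvalue)  # one-tailed P is half the two-tailed P here
```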

Confidence interval

[MG1:p23]

Also see [Descriptive statistics]

  • A 95% confidence interval gives a 95% probability that the true population parameter will be contained within that interval
  • 95% CI = Sample mean +/- 1.96 x SEM (see the sketch after this list)
    * The t distribution is used instead when the sample is small
  • CI is used to:
    * Indicate the precision of an estimate
    * Test hypotheses
    * Indicate the magnitude of any effect, if any
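
A minimal sketch of the CI formula above, using invented data; both the z (1.96 x SEM) and small-sample t versions are shown:

```python
import numpy as np
from scipy import stats

x = np.array([135, 130, 141, 128, 136, 133, 138, 131])   # invented sample
mean = x.mean()
sem = x.std(ddof=1) / np.sqrt(len(x))                    # standard error of the mean

ci_z = (mean - 1.96 * sem, mean + 1.96 * sem)            # large-sample (z) version
t_crit = stats.t.ppf(0.975, df=len(x) - 1)               # t distribution for a small sample
ci_t = (mean - t_crit * sem, mean + t_crit * sem)        # slightly wider than ci_z
print(ci_z, ci_t)
```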

Hypothesis testing with confidence interval

If the 95% confidence interval for the difference between two means contains zero
--> The chance of the two populations being different is less than 95%

If the 95% CI does not contain zero
--> The chance of the two populations being different is 95% or greater.
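
A minimal sketch (invented data) of hypothesis testing via the 95% CI of the difference between two means, using a simple standard-error approximation:

```python
# If the 95% CI for (drug - placebo) excludes zero, the difference is
# significant at the 5% level. Data invented for illustration.
import numpy as np
from scipy import stats

placebo = np.array([142, 138, 150, 145, 139, 147, 141, 144])
drug    = np.array([135, 130, 141, 128, 136, 133, 138, 131])

diff = drug.mean() - placebo.mean()
se_diff = np.sqrt(drug.var(ddof=1) / len(drug) + placebo.var(ddof=1) / len(placebo))
df = len(drug) + len(placebo) - 2                 # simple approximation for the degrees of freedom
t_crit = stats.t.ppf(0.975, df)
ci = (diff - t_crit * se_diff, diff + t_crit * se_diff)
print(ci, "contains zero" if ci[0] <= 0 <= ci[1] else "excludes zero")
```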

Sample size and power calculation

  • Sampling error = Difference between a sample mean and population mean
    --> Decreased by larger sample size
  • Power = the likelihood of detecting a specified difference if it exists
    * = 1-beta

Factors affecting sample size

Sample size depends on

  • Value chosen for alpha (type I error)
    * Smaller alpha requires larger sample size
  • Value chosen for beta (type II error)
    * Smaller beta requires larger sample size
  • Effect size
    * A smaller effect size requires a larger sample size
    * When the effect size to be detected is halved, the required sample size increases 4-fold (see the sketch after this list)
  • Variance in the underlying population
    * Larger variance in the population requires larger sample size
    * Variance is the only variable that an investigator cannot choose
    * Usually estimated from pilot studies or other published data
    * If variance is underestimated, the study may not have enough power to detect a statistically significant difference at the end
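
A minimal sketch of the usual normal-approximation sample size formula for comparing two means, n per group = 2 (z_{1-alpha/2} + z_{1-beta})^2 sigma^2 / delta^2; the sigma and delta values are invented for illustration:

```python
from scipy import stats

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sample comparison of means."""
    z_alpha = stats.norm.ppf(1 - alpha / 2)   # two-tailed significance level
    z_beta = stats.norm.ppf(power)            # power = 1 - beta
    return 2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2

print(n_per_group(delta=10, sigma=15))   # ~35 per group
print(n_per_group(delta=5,  sigma=15))   # ~141 per group: halving the effect size
                                         # roughly quadruples the required n
```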

 

Parametric and non-parametric tests

  • Parametric tests are based on estimates of parameters
    * Can only be used for data on a numerical scale
    * More appropriate when the sample is large (n>100)
  • Non-parametric tests do NOT rely on estimation of parameters
    * Generally used to analyse ordinal and categorical data
    * Not as powerful as parametric tests
    * More appropriate when the sample is small (but not when n<10)
  • The power of the common non-parametric tests is often 95% of that of the equivalent parametric tests (see the sketch below)
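
A minimal sketch (invented data) comparing a parametric test (t test) with a non-parametric counterpart (Mann-Whitney U) on the same data:

```python
import numpy as np
from scipy import stats

a = np.array([3.1, 2.8, 3.5, 4.0, 2.9, 3.3, 3.8, 2.7])
b = np.array([4.2, 3.9, 4.8, 5.1, 4.0, 4.5, 4.9, 3.8])

parametric = stats.ttest_ind(a, b)                                  # assumes an underlying normal distribution
non_parametric = stats.mannwhitneyu(a, b, alternative="two-sided")  # rank-based, no distributional assumption
print(parametric.pvalue, non_parametric.pvalue)                     # non-parametric P is usually a little larger
```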

Central limit theorem

  • As the sample size increases, the shape of the sampling distribution approaches normal distribution, even if the distribution of the variable in the population is not normal
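
A minimal simulation sketch of the central limit theorem using a skewed (exponential) population; the sample sizes and seed are arbitrary:

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
population = rng.exponential(scale=1.0, size=100_000)   # skewed, clearly non-normal

for n in (2, 10, 100):
    # 5,000 repeated samples of size n; take the mean of each sample
    sample_means = rng.choice(population, size=(5_000, n)).mean(axis=1)
    # Skewness moves toward 0 (the value for a normal distribution) as n grows
    print(n, round(skew(sample_means), 2))
```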

Permutation tests

  • Permutation tests work out all the possible outcomes with a given sample size
    --> Determines the likelihood of each outcome
    --> Calculates how likely it is to have achieved the given result or one more extreme
  • Permutation tests
    * Make no assumption about the distribution of the underlying population
    * Do not permit generalisation of results from a sample to the population
  • The only permutation test in common use is Fisher's exact test
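
A minimal sketch of Fisher's exact test on an invented 2x2 table:

```python
from scipy import stats

#                improved  not improved
table = [[12,  3],        # drug
         [ 5, 10]]        # placebo

odds_ratio, p_value = stats.fisher_exact(table)
print(odds_ratio, p_value)   # exact P from enumerating tables with the same margins
```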

Bayesian inference

  • P value is a mathematical statement of probability, therefore:
    * P value ignores the magnitude of the treatment effect
    * P value ignores prior knowledge
  • Bayesian inference is developed from Bayes' theorem
  • Bayesian inference combines the prior probability and the study P value to calculate the posterior probability
    * i.e. the probability in light of the new study
  • Controversial because the determination of prior probability is ill-defined
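
A minimal sketch of Bayes' theorem with two discrete hypotheses (H0: no effect, H1: effect); all the probabilities below are invented for illustration:

```python
prior_h1 = 0.10                 # prior probability that the drug works
prior_h0 = 1 - prior_h1

# Probability of observing the study data under each hypothesis (likelihoods)
p_data_given_h1 = 0.80
p_data_given_h0 = 0.05

# Bayes' theorem: posterior is proportional to prior x likelihood
posterior_h1 = (prior_h1 * p_data_given_h1) / (
    prior_h1 * p_data_given_h1 + prior_h0 * p_data_given_h0
)
print(round(posterior_h1, 2))   # ~0.64: the probability in light of the new study
```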