In traditional statistics, we often act as if we are starting from zero with every new experiment. Decision Theory and Bayesian Inference challenge this notion by asking a simple, powerful question: what if we already know something? This unit studies how to make rational choices under uncertainty by combining our prior beliefs with new, incoming data. For students, it is a fascinating shift from “frequentist” plug-and-play math to a more philosophical, “learning” approach to probability.

Below is the exam paper download link:

PDF Past Paper On Decision Theory And Bayesian Inference I For Revision

To help you master the art of logical choice and posterior distributions, we have synthesized the most common examination hurdles into a clear Q&A revision guide.

What is the core difference between Frequentist and Bayesian Inference?

A Frequentist sees probability as the long-run frequency of an event: the proportion of heads you would expect if you flipped a coin a million times. A Bayesian, however, sees probability as a “degree of belief.” The most significant difference is that Bayesians treat parameters (such as a population mean) as random variables with their own distributions, whereas Frequentists treat them as fixed, unknown constants. This is what allows Bayesians to update their knowledge as new evidence surfaces.

How do we define ‘Bayes’ Theorem’ in a practical context?

Bayes’ Theorem is the engine of this entire subject. It is the mathematical formula used to update a Prior Distribution (what we believed before the data) into a Posterior Distribution (what we believe after the data) using a Likelihood Function (the evidence from our sample). In an exam, you will often hear this expressed as:

“The Posterior is proportional to the Prior times the Likelihood.”
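
In symbols, writing $\pi(\theta)$ for the prior, $f(x \mid \theta)$ for the likelihood of the observed data $x$, and $\pi(\theta \mid x)$ for the posterior:

$$\pi(\theta \mid x) = \frac{f(x \mid \theta)\,\pi(\theta)}{\int f(x \mid \theta)\,\pi(\theta)\,d\theta} \propto f(x \mid \theta)\,\pi(\theta)$$

The denominator (the marginal likelihood) does not depend on $\theta$, which is why the “proportional to” form is usually all you need in an exam.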

What is a ‘Conjugate Prior’ and why is it useful?

In the middle of a high-pressure exam, the last thing you want to do is solve a complex, multi-dimensional integral. A Conjugate Prior is a specific type of prior distribution that, when combined with a specific likelihood, results in a posterior distribution that is in the same “family.” For example, if you have a Beta prior and a Binomial likelihood, your posterior will also be a Beta distribution. This mathematical shortcut is a frequent topic in “Decision Theory and Bayesian Inference I” past papers.
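
As a minimal sketch of that Beta–Binomial update (the prior parameters and the data below are invented purely for illustration), note that the posterior falls out by simple counting, with no integration:

```python
from scipy import stats

# Hypothetical example: a Beta(2, 2) prior on a coin's probability of heads,
# then k = 7 heads observed in n = 10 tosses (numbers chosen for illustration).
a, b = 2, 2      # prior: Beta(a, b)
n, k = 10, 7     # data: k successes in n Binomial trials

# Conjugacy: Beta prior + Binomial likelihood => Beta posterior,
# with parameters updated by addition, no integral required.
post_a, post_b = a + k, b + (n - k)

posterior = stats.beta(post_a, post_b)
print(f"Posterior: Beta({post_a}, {post_b})")        # Beta(9, 5)
print(f"Posterior mean: {posterior.mean():.3f}")     # (a+k)/(a+b+n) = 9/14 ≈ 0.643
```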


What are the main ‘Decision Criteria’ under uncertainty?

When you have to make a choice without knowing exactly what the future holds (the States of Nature), you use different criteria based on your “risk appetite”; as the sketch after this list shows, the three can each recommend a different decision:

  1. Maximin: The “pessimist’s choice”—you look at the worst-case scenario for every decision and pick the one with the best “worst” outcome.

  2. Maximax: The “optimist’s choice”—you pick the decision that could lead to the highest possible profit, ignoring the risks.

  3. Minimax Regret: You focus on “opportunity loss,” choosing the path that minimizes the amount of “regret” you would feel for not picking the best possible option after the fact.
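
Here is a short worked sketch of all three criteria applied to the same hypothetical payoff matrix (every figure below is invented for illustration):

```python
import numpy as np

# Hypothetical payoff matrix: rows are decisions A, B, C;
# columns are three possible states of nature. All figures invented.
payoffs = np.array([
    [50,  20, -10],   # decision A
    [80, -30, -40],   # decision B
    [30,  30,  10],   # decision C
])
labels = "ABC"

# Maximin: best of the row minima (the pessimist's choice).
print("Maximin:", labels[payoffs.min(axis=1).argmax()])        # C (worst case 10)

# Maximax: best of the row maxima (the optimist's choice).
print("Maximax:", labels[payoffs.max(axis=1).argmax()])        # B (best case 80)

# Minimax regret: regret = best payoff in each state minus actual payoff;
# pick the decision whose worst regret is smallest.
regret = payoffs.max(axis=0) - payoffs
print("Minimax regret:", labels[regret.max(axis=1).argmin()])  # A (worst regret 30)
```

Each criterion picks a different decision here, which is precisely the point: they encode different risk attitudes, not different data.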

What is a ‘Loss Function’?

A Loss Function $L(\theta, a)$ quantifies the “penalty” or cost of making an estimate or decision ($a$) when the true state of the world is $\theta$. Common types include the Squared Error Loss (which leads to the posterior mean as the best estimate) and the Absolute Error Loss (which leads to the posterior median). Understanding which loss function to use is critical for solving “Bayes Risk” problems.
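
A Bayes estimate minimises the posterior expected loss $E[L(\theta, a) \mid x]$ over actions $a$. The sketch below, using an arbitrary made-up “posterior,” checks numerically that the minimiser is the posterior mean under squared error and the posterior median under absolute error:

```python
import numpy as np

# Draws from a made-up, skewed "posterior" (a Gamma, chosen purely for illustration).
rng = np.random.default_rng(0)
theta = rng.gamma(shape=2.0, scale=1.5, size=50_000)

# Posterior expected loss for each candidate estimate a:
candidates = np.linspace(theta.min(), theta.max(), 801)
sq_loss  = [np.mean((theta - a) ** 2) for a in candidates]   # squared error loss
abs_loss = [np.mean(np.abs(theta - a)) for a in candidates]  # absolute error loss

print("Squared-error minimiser :", candidates[np.argmin(sq_loss)])
print("Posterior mean          :", theta.mean())      # should be close
print("Absolute-error minimiser:", candidates[np.argmin(abs_loss)])
print("Posterior median        :", np.median(theta))  # should be close
```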

How does ‘Decision Tree’ analysis work?

A Decision Tree is a visual map of the choices, uncertainties, and payoffs involved in a problem. You start from the right (the final outcomes) and “fold back” the tree to the left by calculating the Expected Monetary Value (EMV) at each chance node. This allows a decision-maker to see the statistically “optimal” path before they ever spend a shilling.
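
A minimal fold-back sketch, with all probabilities, payoffs, and the KSh 5,000 test cost invented for illustration:

```python
# Fold back a tiny decision tree: compute the EMV at each chance node,
# then take the best branch at the decision node. All figures are hypothetical.

def emv(branches):
    """Expected Monetary Value of a chance node: sum of probability * payoff."""
    return sum(p * payoff for p, payoff in branches)

# Two choices at the decision node (payoffs in KSh, probabilities invented):
launch_now = emv([(0.6, 100_000), (0.4, -50_000)])            # = 40,000
test_first = emv([(0.7, 80_000), (0.3, -10_000)]) - 5_000     # = 48,000 after test cost

# Decision node: choose the branch with the highest EMV.
name, value = max([("launch now", launch_now), ("test first", test_first)],
                  key=lambda branch: branch[1])
print(f"Optimal path: {name} (EMV = KSh {value:,.0f})")
```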

PDF Past Paper On Decision Theory And Bayesian Inference I For Revision


Conclusion

“Decision Theory and Bayesian Inference I” is a unit that rewards those who can think several steps ahead. It’s about more than just numbers; it’s about the logic of rational behavior. By practicing with past papers, you get used to the specific phrasing of “prior information” and learn to build decision matrices that can withstand any state of nature.

To sharpen your Bayesian logic and prepare for your upcoming finals, use the comprehensive resource linked above.

Last updated on: March 24, 2026