If we add up the values that a random variable can take, each multiplied by its corresponding probability, we get the expected value (or mean) of the random variable. This is essentially a weighted average of all possible outcomes, where the weights are the probabilities of those outcomes. Mathematically, for a discrete random variable $X$ with possible values $x_i$ and probabilities $P(X = x_i)$, the expected value $E[X]$ is computed as:
$$E[X] = \sum_i x_i \cdot P(X = x_i)$$
This sum takes each value the variable can assume, multiplies it by how likely that value is, and then adds all these products together. For example, if a random variable $X$ can take values 0, 1, 2, 3, and 4 with probabilities 0.1, 0.15, 0.4, 0.25, and 0.1 respectively, the expected value is:
$$0 \times 0.1 + 1 \times 0.15 + 2 \times 0.4 + 3 \times 0.25 + 4 \times 0.1 = 2.1$$
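To make the computation concrete, here is a minimal Python sketch of the same weighted sum; the value and probability lists are taken directly from the example above:

```python
# Expected value as a probability-weighted sum of the possible values.
values = [0, 1, 2, 3, 4]
probs = [0.1, 0.15, 0.4, 0.25, 0.1]

expected_value = sum(x * p for x, p in zip(values, probs))
print(expected_value)  # 2.1 (up to floating-point rounding)
```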
This means that, on average, the value of $X$ is 2.1, even though $X$ itself can only take integer values. The expected value represents a long-term average over many repetitions of the experiment, not necessarily a value $X$ will take in any single trial.
In summary, adding up the values a random variable can take, weighted by their probabilities, yields the expected value, which is a measure of the central tendency or average outcome of the random variable.