We consider the fundamental problem of estimating the difference between the exact value $T$ and approximations $A_h$ that depend on a single real parameter $h$. It is well known that if the error $E_h = T - A_h$ satisfies an asymptotic expansion, then we can use Richardson extrapolation to approximate $E_h$. In this paper, our primary concern is the accuracy of Richardson's error estimate $R_h$, i.e., the size of the relative error $(E_h - R_h)/E_h$. In practice, the computed value $\hat{A}_h$ is different from the exact value $A_h$. We show how to determine when the computational error $A_h - \hat{A}_h$ is irrelevant and how to estimate the accuracy of Richardson's error estimate in terms of Richardson's fraction $F_h$. We establish monotone convergence theorems and derive upper and lower bounds for $T$ in terms of $A_h$ and $R_h$. We classify asymptotic error expansions according to their practical value rather than the order of the primary error term. We present a sequence of numerical experiments that illustrate the theory. Weierstrass's function is used to define a sequence of smooth problems for which it is impractical to apply Richardson's techniques.
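To make the setting concrete, the following sketch illustrates the classical Richardson error estimate for an approximation with error expansion $E_h = T - A_h \approx c h^p$: halving $h$ and combining the two approximations yields an estimate of $E_{h/2}$. The choice of central differencing as the example approximation, and the variable names, are illustrative assumptions, not taken from the paper.

```python
import math

def richardson_error_estimate(A, h, p):
    """Estimate the error E_{h/2} = T - A(h/2), assuming the
    leading-order expansion E_h ~ c*h^p, via the classical formula
    (A(h/2) - A(h)) / (2**p - 1)."""
    return (A(h / 2) - A(h)) / (2**p - 1)

# Illustration: central difference for f'(x), whose error is O(h^2).
f, x = math.exp, 1.0
A = lambda h: (f(x + h) - f(x - h)) / (2 * h)
T = math.e  # exact value of f'(1)

h, p = 0.1, 2
R = richardson_error_estimate(A, h, p)
true_err = T - A(h / 2)
print(R, true_err)  # the estimate closely tracks the true error
```

For this smooth example the relative error $(E_{h/2} - R)/E_{h/2}$ is of order $h^2$, which is precisely the kind of accuracy question the paper analyzes.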