From their abstract (slightly edited):
We assume that perfectly patient agents estimate the value of future events by generating noisy, unbiased simulations and combining those signals with priors to form posteriors. These posterior expectations exhibit as-if discounting: agents make choices as if they were maximizing a stream of known utils weighted by a discount function. This as-if discount function reflects the fact that estimated utils are a combination of signals and priors, so average expectations are optimally shaded toward the mean of the prior distribution, generating behavior that partially mimics the properties of classical time preferences. When the simulation noise has variance that is linear in the event's horizon, the as-if discount function is hyperbolic.

Among other things, then, they provide a rational foundation for the "myopia" associated with hyperbolic discounting.
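The mechanism can be sketched with standard normal-normal Bayesian updating. In this hedged illustration (the parameter names \( \tau^2 \) and \( \kappa \) are mine, not the paper's), the posterior mean shrinks the noisy simulated signal toward a zero prior mean, and when the simulation-noise variance is linear in horizon \(h\), the shrinkage weight is exactly a hyperbolic discount function:

```python
import numpy as np

# Sketch of the as-if discounting mechanism under normal-normal updating
# (illustrative parameters tau2, kappa; alpha = kappa / tau2 is implied).
# The agent simulates the util u of an event at horizon h, seeing a signal
# s = u + noise with noise variance kappa * h (linear in horizon).
# With prior u ~ N(0, tau2), the posterior mean is a shrunk signal:
#   E[u | s] = w(h) * s,  where  w(h) = tau2 / (tau2 + kappa * h).
# Algebraically, w(h) = 1 / (1 + (kappa / tau2) * h): hyperbolic discounting.

tau2, kappa = 4.0, 1.0            # prior variance; noise-variance slope in h
alpha = kappa / tau2              # implied hyperbolic parameter

h = np.arange(0, 21)              # horizons 0..20
w = tau2 / (tau2 + kappa * h)     # Bayesian shrinkage weight on the signal
hyperbolic = 1.0 / (1.0 + alpha * h)

print(np.allclose(w, hyperbolic))  # → True: the two coincide identically
```

The identity is immediate: dividing numerator and denominator of \( \tau^2/(\tau^2 + \kappa h) \) by \( \tau^2 \) gives \( 1/(1 + (\kappa/\tau^2) h) \).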
Note that in the Gabaix-Laibson environment everything depends on how forecast error variance behaves as a function of forecast horizon \(h\). But we know a lot about that. For example, in linear covariance-stationary \(I(0)\) environments, optimal forecast error variance grows with \(h\) at a decreasing rate, approaching the unconditional variance from below. Hence it cannot grow linearly with \(h\), which is what produces hyperbolic as-if discounting. In contrast, in non-stationary \(I(1)\) environments, optimal forecast error variance does eventually grow linearly with \(h\). In a random walk, for example, \(h\)-step-ahead optimal forecast error variance is just \(h \sigma^2\), where \( \sigma^2\) is the innovation variance. It would be fascinating to put people in \(I(1)\) vs. \(I(0)\) laboratory environments and see if hyperbolic as-if discounting arises in \(I(1)\) cases but not in \(I(0)\) cases.
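The \(I(0)\) vs. \(I(1)\) contrast above can be checked numerically. A minimal sketch, using the textbook forecast-error-variance formulas for an AR(1) and a random walk (the parameter values \( \phi = 0.8 \), \( \sigma^2 = 1 \) are illustrative):

```python
import numpy as np

# h-step-ahead optimal forecast error variance in two environments:
# AR(1) with |phi| < 1 (an I(0) case):
#   Var_h = sigma2 * (1 - phi**(2h)) / (1 - phi**2),
#   which grows at a decreasing rate toward the unconditional
#   variance sigma2 / (1 - phi**2) -- it cannot grow linearly in h.
# Random walk (an I(1) case):
#   Var_h = h * sigma2, exactly linear in h.

phi, sigma2 = 0.8, 1.0
h = np.arange(1, 41)

var_ar1 = sigma2 * (1 - phi ** (2 * h)) / (1 - phi ** 2)
var_rw = h * sigma2
uncond = sigma2 / (1 - phi ** 2)

print(np.all(var_ar1 < uncond))          # approaches unconditional variance from below
print(np.all(np.diff(var_ar1, 2) < 0))   # concave in h: growth at a decreasing rate
print(np.allclose(np.diff(var_rw), sigma2))  # random walk: linear growth, slope sigma2
```

Only the random-walk profile delivers the variance-linear-in-\(h\) condition that generates hyperbolic as-if discounting in the Gabaix-Laibson setup.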