Mainstream economists promote the idea that an economic proposition not backed up by some mathematical expression is clearly inferior and likely to be wrong.
Suffice to say that the great works of Marx and Keynes, among others, would be disregarded if the
inclusion of mathematical squiggles were the demarcation criterion between deficient and sound analysis.
But it is also not correct to say that MMT economists avoid formal expressions; we use them when we consider them to be useful in advancing comprehension.
For example, in my 2008 book with Joan Muysken – Full Employment Abandoned – there is a lot of
mathematical exposition, where appropriate and effective.
Further, the expression Garbage-In, Garbage-Out applies in this case.
A formal mathematical model is just a logical construct following the rules of mathematics. Whether
it has traction with the real world is another matter altogether, and that depends, in part, on the
assumptions we start with to ‘set up’ the formal model.
So if we start by assuming there is a ‘representative agent’ (representing us all, to overcome
intractable aggregation problems) that is always rational and maximising, who follows the formal
dictates of rational expectations (which assume that, on average, guesses about the future are
correct) and who can ‘solve’ complex intertemporal (across time) maximising problems requiring
mastery of techniques such as stochastic processes, measure theory, Lebesgue integrals, Itô
calculus and the rest, then it is pretty certain that the output from such an exercise will be nonsense.
Hence, the failure to predict the Global Financial Crisis or even to see that there was any problem at all.
The evidence clearly shows that people within social systems do not behave remotely like the ‘single
person’ (agent) in mainstream macroeconomic models.
The introduction of rational expectations into the literature (in the late 1960s but the idea really
gathered pace in the late 1970s) led to mainstream economists talking endlessly about
‘forward-looking maximising behaviour’.
John Muth (1961), who introduced the idea to economists, claimed (p.316) that:
I should like to suggest that expectations, since they are informed predictions of future
events, are essentially the same as the predictions of the relevant economic theory …
[Reference: Muth, J.F. (1961) ‘Rational Expectations and the Theory of Price Movements’,
Econometrica, 29(3), 315-35.]
In other words, when we make guesses about the future, we are assumed to be acting as if we know the
actual data generating process that will deliver that future. We are sometimes wrong but on average
our errors net to zero – which means we have more or less perfect foresight.
William Poole summarised the literature in this way (p.468):
The rational-expectations hypothesis is that the market’s psychological anticipation …
[future price] … equals the true model’s expectation …
[Reference: Poole, W. (1976) ‘Rational Expectations in the Macro Model’, Brookings Papers on
Economic Activity, 2, 463-514.]
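Stated formally (in my own illustrative notation, not drawn directly from Muth or Poole), the hypothesis amounts to:

\[
P^{e}_{t} = E\left[ P_{t} \mid \Omega_{t-1} \right], \qquad P_{t} - P^{e}_{t} = \varepsilon_{t}, \qquad E[\varepsilon_{t}] = 0
\]

where \(P^{e}_{t}\) is the market’s subjective forecast of the price level \(P_{t}\), \(\Omega_{t-1}\) is an information set assumed to contain the ‘true’ model of the economy, and the forecast errors \(\varepsilon_{t}\) are purely random with zero mean. The whole weight of the hypothesis rests on the assumption that agents actually possess that true model.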
The economic modelling task then came down to the following steps:
- Assume – that is, assert without foundation – that all persons are rational and deploy rational
expectations, and that they interact within efficient, competitive markets (that is, where prices shift to
balance demand and supply so that the configuration of outcomes (resource usage) is optimal for all).
- Write some mathematical equations reflecting this.
- Solve the equations for the unknown outcomes.
- Shock the ‘solution’ with some policy change and ‘prove’ it is ineffective because, as a result
of the first assumption, all agents predict the shock in advance and act to negate it (see the sketch after this list).
- Write ridiculous articles that claim that fiscal policy is ineffective.
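To make the recipe concrete, here is a minimal sketch (my own illustrative code, not any published model) of a Lucas-style supply relationship in which only the unanticipated part of policy moves output. Under rational expectations agents are assumed to know the policy rule, so the systematic component of policy has no real effect; under a simple backward-looking rule, it does.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 200
b = 1.5            # output response to policy 'surprises'
rho = 0.8          # persistence of the announced (systematic) policy rule
shock = rng.normal(0, 0.5, T)   # purely random policy component

# Systematic (announced) policy: m_t = rho * m_{t-1} + shock_t
m = np.zeros(T)
for t in range(1, T):
    m[t] = rho * m[t-1] + shock[t]

# Rational expectations: agents know the rule, so E[m_t] = rho * m_{t-1}
# and only the random shock moves output.
y_rational = b * (m[1:] - rho * m[:-1])          # = b * shock

# Backward-looking rule of thumb: agents simply expect last period's setting.
y_adaptive = b * (m[1:] - m[:-1])

anticipated = rho * m[:-1]   # the systematic, fully predictable policy stance
print("corr(anticipated policy, output) under rational expectations:",
      round(np.corrcoef(anticipated, y_rational)[0, 1], 3))
print("corr(anticipated policy, output) under backward-looking expectations:",
      round(np.corrcoef(anticipated, y_adaptive)[0, 1], 3))
```

The point of the sketch is only that the ‘policy ineffectiveness’ result is baked into the assumptions at the first step; the mathematics merely restates them.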
Arthur Okun (hardly a radical economist) once mused that if the mathematical depiction of
decision making represented by the rational expectations literature were correct, then all the
economists on payrolls around the world would be redundant, because even the person delivering the post
would ‘know’ the underlying economic model that generated the empirical observations we call economic data.
While reflecting on the usefulness of rational expectations, James Tobin noted in 1980 that (p.796):
Herbert Simon and others have accumulated considerable evidence to support the hypothesis
that decision makers, from students and consumers to executives and statesmen, use “rules of thumb”
in the face of uncertainties and complexities that defy detailed analysis and explicit optimization.
Decision making itself is costly. The rules that simplify decisions are not unalterable, of course,
but they tend to persist unless the environment is perceived to have changed drastically or they
yield disastrous results.
[Reference: Tobin, J. (1980) ‘Are New Classical Models Plausible Enough to Guide Policy?’, Journal
of Money, Credit and Banking, 12(4), 788-799.]
There has been a long-standing tradition of institutional researchers who have understood that
individuals do not behave in the way depicted by these streamlined mathematical frameworks deployed
by economists. The more recent behavioural economics research has ratified the conclusions of those earlier researchers.
Tobin had earlier written (1972, p.13):
Lucas’ paper provides a rigorous defense of the natural rate hypothesis, and the study’s
rigor and sophistication have the virtue of making clear exactly what the hypothesis requires. The
structure of the economy, including the rules guiding fiscal and monetary policy, must be stable and
must be understood by all participants. The participants not only must receive the correct
information about the structure but also must use all of the data correctly in estimating prices and
in making quantity decisions. These participants must be better econometricians than any of us at
the Conference. If they are, they will always be – except unavoidable mistakes due to purely random
elements in the time sequence of aggregate money demand – at their utility- and profit-maximizing positions.
There was a touch of humour here but the point he was making was obvious. The mathematical models
that mainstream economists deploy place such unrealistic demands on human reasoning that they are of
little use in understanding what actually goes on in the real world.
[Reference: Tobin, J. (1972) ‘The Wage-Price Mechanism: Overview of the Conference’, in Eckstein, O.
(ed.) The Econometrics of Price Determination, Board of Governors of the Federal Reserve System and
Social Science Research Council, Washington, 5-15.]
But research communities that become crippled by the onset of Groupthink avoid these intersections with other research traditions.
Exactly. I call it paralysis by analysis. Falling in love with data at the expense of the
individuals behind the data. Numbers are only as good as their interpretation, and numbers are
easily misinterpreted if you have misconceptions or biases about the underlying phenomenon because
humans innately see what they want to see.