
History of Econometrics

Reflections on the LSE Tradition in Econometrics: a Student’s Perspective

Aris Spanos
p. 343-380

Abstract

Since the mid 1960s the LSE tradition, led initially by Denis Sargan and later by David Hendry, has contributed several innovative techniques and modeling strategies to applied econometrics. A key feature of the LSE tradition has been its striving to strike a balance between the theory-oriented perspective of textbook econometrics and the ARIMA data-oriented perspective of time series analysis. The primary aim of this article is to provide a student’s perspective on this tradition. It is argued that its key contributions and its main call to take the data more seriously can be formally justified on sound philosophical grounds and provide a coherent framework for empirical modeling in economics. Its full impact on applied econometrics will take time to unfold, but the pervasiveness of its main message is clear: statistical models that account for the regularities in data can enhance the reliability of inference and policy analysis, and guide the search for better economic theories by demarcating ‘what there is to explain’.

Full text

1 The LSE Tradition in Econometrics

¹ This dating is based on the presence of the main protagonist, Denis Sargan, as a faculty member at (...)

The term ‘LSE tradition in econometrics’ is used to describe a particular perspective on econometric modeling developed by a group of econometricians associated with the London School of Economics (LSE) during the period 1963-1984¹.

The primary aim of the paper is to undertake a retrospective appraisal of this tradition, viewed in the broader context of the post-Cowles Commission developments in econometric modeling. The viewing angle is that of a student of the tradition who was motivated by the aspiration to facilitate its framing into a coherent methodological framework, one in which the key practices of that tradition could be formally justified on sound statistical and philosophical grounds.

The research agenda of the LSE tradition was influenced primarily by the experience of protagonists like Sargan and Hendry in modeling time series data. This experience taught them that estimating a static theoretical relationship using time series data would often give rise to a statistically misspecified model. This is because most of the assumptions of the implicit statistical model, usually the Linear Regression model, are likely to be invalid, undermining the reliability of any inference based on such a model; see Sargan (1964), Hendry (1977).

The LSE econometricians knew that the econometrics journals of the time, dominated by the theory-oriented perspective, would publish applied papers only when the author could demonstrate that the estimated model was meaningful in terms of a particular theory. This presented them with a dilemma: follow the traditional curve-fitting approach of foisting the theory model on the data, ‘correcting’ the error term assumptions if you must, or find alternative ways to relate dynamic models with lags and trends to static theories. This, more than any other issue, gave the LSE tradition its different perspective and guided a large part of its research agenda.

1.1 A Student’s Viewing Angle

I completed all my degrees (B.Sc., M.Sc. and Ph.D.) at the LSE during the period 1973-1982, where I followed a programme called "Mathematical Economics and Econometrics", both at the undergraduate and graduate levels. I studied undergraduate statistics and econometrics with Ken Wallis, Alan Stuart, Jim Durbin, David Hendry and Grayham Mizon, and graduate time series analysis and econometrics with Jim Durbin and Denis Sargan. In addition to several courses in mathematical economics at different levels taught by Steve Nickell and Takashi Negishi, most crucial for my education at the LSE were a number of courses from the mathematics department, most of them taught by Ken Binmore, including analysis, set theory and logic, mathematical optimization, linear algebra, game theory, measure theory and functional analysis, which I attended with the encouragement of the economics department. The person who guided me toward econometrics during the final year of my undergraduate studies was Terence (W.M.) Gorman. After his short preamble on ‘how a mathematical economist is likely to be up in the clouds for the entirety of one’s career, in contrast to an econometrician who is forced to keep one leg on the ground due to dealing with real data’, I agreed to meet with Denis Sargan and to re-focus my graduate studies toward econometrics rather than mathematical economics.

As a full-time student during the period 1973-1979, I gradually realized that there was something exceptional about studying econometrics at the LSE. This ‘special’ atmosphere would become manifest during the lively econometric seminars given by faculty and students, as well as a number of visitors, including Ted Anderson, Peter Phillips, Clive Granger, Rob Engle, Tom Rothenberg and Jean-François Richard. I became aware of the difference between what I was being taught as ‘textbook econometrics’ and the LSE oral tradition in applied econometrics when I worked for Gorman during the summers of my 2nd and 3rd years as an undergraduate (1974-5) and had the opportunity to interact with faculty members like Steve Putney, Tony Shorrocks and Meghnad Desai. I became so eager to find out more that as an undergraduate I decided to attend (informally) Durbin’s graduate course on Time Series, which I thoroughly enjoyed. I had to retake the course formally as an M.Sc. student, but I welcomed the opportunity to learn more from that enlightening course.

For my Ph.D. I had David Hendry as my main advisor, and Denis Sargan as a secondary advisor when Hendry was on leave, which was quite often during the period 1980-1981. A crucial part of my thesis on "Latent Variables in Dynamic Econometric Models" was to bring out the key differences between the LSE and the textbook econometrics traditions as they pertain to two broad methodological problems:

[A] How to bridge the gap between theory and data in a way that avoids attributing to the data the subordinate role of quantifying theories presumed true.

[B] How to address the twin problems of model validation and model selection using coherent strategies that account for the probabilistic structure of the data.

I began my academic career at Birkbeck College (another college of the University of London) in the autumn of 1979. After a disillusioning attempt to teach textbook econometrics at a graduate level jointly with Tom Cooley and Ron Smith in 1979-80, I decided to undertake a recasting of textbook econometrics in the spirit of the LSE tradition by writing extensive lecture notes. These notes provided the basis of my graduate econometrics course at Birkbeck College for several years. The students used to complain (with good justification) that the corresponding econometrics course at the LSE, based on the textbook approach, was considerably less challenging, both at the technical and conceptual levels, than the course they had to endure with me. With a lot of encouragement from John Muellbauer, these lecture notes were eventually published in Spanos (1986), with a foreword by Hendry. It was the first ‘unofficial’ textbook inspired by the LSE tradition during its formative phase.

My initial aim was to justify, on sound statistical and philosophical grounds, the LSE tradition’s use of models with trends and lags, an obvious attempt to account for the statistical information in time series data. I immersed myself in philosophy of science, focusing primarily on the LSE philosophers Popper and Lakatos and the related literature, in an attempt to find some answers, but to no avail. The problem with the philosophical accounts of confirmation and falsification is that they take trustworthy evidence e and testable claims h as readily available (Chalmers, 1999). In empirical modeling, however, the real problem is how to construct e and h.

After reflecting on these issues I concluded that the best way to construct e and h, with a view to bridging the gap between theory and data, was to devise a sequence of interconnecting models: theory (which might include latent variables), structural (the estimable theory model), statistical (accounting for the regularities in the data) and empirical models (the blending of substantive and statistical information); see Spanos (1986, 1988). Shortly afterwards, I discovered that Haavelmo (1944) had expressed similar ideas that went largely unnoticed by the subsequent econometric literature.

2 The Historical Context

2.1 Revisiting Haavelmo’s Neglected Insights

Trygve Haavelmo was a Norwegian econometrician held in high esteem by the LSE econometricians because of his key contributions in puzzling out the ‘simultaneity bias’ problem and framing the Simultaneous Equations Model (SEM) in a way that largely shaped the Cowles Commission research agenda during the 1940s. I had studied Haavelmo (1943, 1947), but I was unaware of Haavelmo (1944), upon which I stumbled while going through the early volumes of Econometrica. The effect of that monograph on me was stunningly revelatory. Haavelmo (1944) articulated very clearly most of the problems I was grappling with, and his monograph provided many valuable insights on how to address them; see Spanos (1989; 1995; 2014).

[i] His distinction between ‘theoretical’, ‘observational’ and ‘true’ variables and the observed data was most perceptive, and his discussion of how one might bridge the gap between theory and data by contrasting ‘artificial isolation designs’ with those of ‘Nature’ was awe-inspiring. He warned practitioners against assuming that the variables envisaged by a theory always coincide with particular data series, and encouraged them to pose certain key questions (1944, 16):

(a) Are most of the theories we construct in "rational economics" one for which historical data and passive observations are not adequate experiments? This question is connected with the following:

(b) Do we try to construct theories describing what individuals, firms, etc. actually do in the course of events, or do we construct theories describing schedules of alternatives at a given moment? If the latter is the case, what bearing do such schedules of alternatives have upon a series of decisions and actions actually carried out?

Haavelmo (1944, 7) articulated most perceptively the answer to the dilemma faced by the LSE tradition, by arguing that, in the case of observational data:

... [one] is presented with some results which, so to speak, Nature has produced in all their complexity, his task being to build models that explain what has been observed.

[ii] His embracing of the Fisher-Neyman-Pearson approach to statistical inference led him to argue convincingly in favor of employing parsimonious parametric statistical models (74-75) in learning from data about phenomena of interest. This is in contrast to the curve-fitting perspective adopted by the textbook tradition; see Spanos (2014). Haavelmo (1943) warned against the perils of ‘curve-fitting’ by attaching random error terms to deterministic theory models:

Without further specification of the model, this procedure has no foundation ... First, the notion that one can operate with some vague idea about "small errors" without introducing the concepts of stochastical variables and probability distributions, is, I think, based upon an illusion. (Haavelmo, 1943, 5)

His recommended alternative strategy, on page 7, was:

to avoid inconsistencies,..., all formulae for estimating the parameters involved should be derived on the basis of this joint probability law of all the observable variables involved in the system. (This I think is obvious to statisticians, but it is overlooked in the work of most economists who construct dynamic models to be fitted to the data.)

In this sense, Haavelmo foreshadowed the emphasis by the LSE tradition on modeling the observable process $\{\mathbf{Z}_t,\ t\in\mathbb{N}\}$ underlying the data $\mathbf{z}_0$, by advocating a probabilistic foundation for inference based on the joint distribution $D(\mathbf{Z}_1,\ldots,\mathbf{Z}_n;\boldsymbol{\varphi})$, as well as the crucial role of assessing the validity of models before drawing any inferences. Indeed, the focus on $D(\mathbf{Z}_1,\ldots,\mathbf{Z}_n;\boldsymbol{\varphi})$, in conjunction with his SEM framing, provided the key to disentangling the statistical from the structural model.

[iii] Haavelmo (1940, 1958) warned practitioners that accounting for the regularities in the data (statistical adequacy) is not equivalent to either (a) the model ‘fitting the data well’ or (b) the model being able to simulate ‘realistic-looking data’: “It has become almost too easy to start with hard-boiled and oversimplified "exact" theories, supply them with a few random elements, and come out with models capable of producing realistic-looking data.” (Haavelmo, 1958, 354)

The above methodological insights from Haavelmo will be used in section 5 to address some of the key methodological problems raised by the LSE tradition.

2.2 The Origins of the LSE Tradition

The origins of the LSE tradition go back to the early 1960s, when Jim Durbin (statistics department) and Bill Phillips, of Phillips curve (1958) fame (economics department), played a crucial role in creating the econometrics group at the LSE with two key appointments. As described by Durbin (Phillips, 1988, 135):

Bill Phillips and I cooperated in getting two new posts at the readership level at the school: one in the economics department and one in the statistics department, both in econometrics. Rex Bergstrom took the post in the economics department for a time and we persuaded Denis Sargan to come from Leeds to the post in the statistics department.

Gilbert (1989, 127-128) describes the initial development of econometrics at the LSE:

The reason econometrics developed at LSE and not elsewhere in Britain was because of the close links between the Economics Department and an independent but economics-oriented Statistics Department, links which did not exist elsewhere. These close links promoted a fertilization process in which the LSE econometricians took over elements from time series analysis. The intellectual problem was how to benefit from the data-instigated time series approach to specification (identification) while at the same time being able to make structural inferences in the Cowles tradition.

Indeed, one can make a strong case that the LSE group sought to find common ground between the ARIMA-type modeling of time series (promoted by statisticians) and the simultaneous equations modeling (favored by traditional econometricians), with a view to reconcile the two perspectives; see Spanos (2010a).

Phillips (1988, 125) described Jim Durbin’s role in the ‘LSE tradition’:

By the 1960s it was apparent to many that the LSE was the place where it was all happening in econometrics, not only in research but also in teaching programs. Indeed, successive waves of students graduated with a special LSE pedigree that stood for the best in econometric training combined with a special interest and understanding of statistical time series. This combination has endured to the present and one of Jim’s distinct legacies to the LSE has been the establishment and continuity of this intellectual tradition.

It is particularly interesting that both authors bring out the same two factors as being instrumental for the development of the LSE tradition in econometrics.

The first was the close collaboration between the statistics and economics departments at the LSE in fostering time series modeling and econometric theory, with Durbin and Sargan the protagonists; see Sargan (2003). It is worth noting that Durbin and Sargan were contemporaries at St. John’s College, Cambridge, following similar undergraduate courses in mathematics (Phillips, 1988).

The second contributing factor was the concerted effort to reconcile the experience of these protagonists in modeling time series data with the theory-oriented post-Cowles tradition in econometrics. The first factor is discussed next.

2.3 Synergies Between Statistics and Economics

During the 1950s the statistics department at the LSE had a strong group of statisticians, including Durbin, Kendall, Stuart and Quenouille, who were also interested in time series modeling and econometrics. Kendall and Stuart (1969, 1973, 1968) was the three-volume magnum opus for advanced-level statistics courses. Kendall (1953) was a highly influential paper on the statistical modeling of speculative prices; see Andreou et al. (2001). Quenouille (1957) was the first monograph to provide a coherent statistical treatment of Vector Autoregressive (VAR) models as well as ARMA(p,q) models. In the early 1960s Durbin introduced a course on ‘Advanced Statistical Methods for Econometrics’ that began a new era for econometrics at the LSE; see Phillips (1988).

Denis Sargan began his career at the LSE as a Reader in econometrics in the statistics department in 1963 and took over the teaching of graduate econometrics courses. He became a professor of econometrics in the economics department in 1964. After that, Durbin focused his graduate teaching on a time series course which was remarkably attuned to the latest developments in that field. Almost immediately upon its publication, Durbin recognized the path-breaking potential of Box and Jenkins (1970), "Time Series Analysis", and gave it center stage in his graduate time series course.

He strongly emphasized the iterative nature of the Box-Jenkins time-series modeling strategy, involving several stages (identification, estimation, diagnostic checking, forecasting), with special emphasis on graphical techniques and diagnostic checking. Durbin was a strong advocate of diagnostic checking, which evolved into current Mis-Specification (M-S) testing. After proposing the first such test for autocorrelation (Durbin and Watson, 1950), he put forward several additional M-S tests (Durbin, 1975), including tests for the constancy of the parameters using recursive least-squares (Brown, Durbin and Evans, 1975). Indeed, he was a strong advocate of thorough M-S testing for model validation purposes:

I’ve always thought it was really quite important to carry out diagnostic tests. Certainly in econometric applications and other applications of regression analysis to time series data, I think it is important to check out whether the assumptions on which inference is based are satisfied. (Phillips, 1988, 132-133)

He went on to elaborate on the crucial importance of M-S testing:

In the many fields of interest to me such as time series and applications in econometrics and the social sciences, one now has the possibility of calculating a large number of different diagnostic test statistics. Of course, I have a special interest in tests of autocorrelation, but one thinks of tests of normality, one thinks of tests of heteroskedasticity, and so on. ... And if we find these assumptions are invalid we can make modifications and then do some more diagnostic tests. (Phillips, 1988, 151)

Indeed, he dismissed the ‘multiple hypotheses’ charge against M-S testing, calling it ‘theoretical’ in a derogatory sense: “I think it’s quite right and proper for an applied worker to look at a wide variety of diagnostic tests and, especially, I like the idea of graphical procedures.” (Phillips, 1988, 151)
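
The recursive least-squares idea behind the parameter-constancy tests of Brown, Durbin and Evans (1975) can be conveyed by a small sketch (illustrative Python with simulated data and an assumed break point; it is not their CUSUM test itself):

```python
# Illustrative sketch (not the Brown-Durbin-Evans test itself): recursive
# least-squares coefficient paths as a rough parameter-constancy check.
import numpy as np

rng = np.random.default_rng(0)
n = 120
x = rng.normal(size=n)
beta1 = np.where(np.arange(n) < 60, 1.0, 2.0)       # slope shifts mid-sample
y = 0.5 + beta1 * x + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), x])
slope_path = []
for t in range(10, n + 1):                           # expanding estimation window
    b, *_ = np.linalg.lstsq(X[:t], y[:t], rcond=None)
    slope_path.append(b[1])

# A recursive slope estimate that drifts from ~1 toward ~2 signals non-constancy.
print(np.round(slope_path[::25], 2))
```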

The close collaboration between the statistics and economics departments at the LSE was particularly crucial for reconciling the theory-oriented perspective of the Cowles Commission with the data-oriented perspective of time series modeling. The high-level grounding in statistical theory for the M.Sc. in Econometrics contributed significantly to the effort because it gave the students the necessary background and enough confidence to pursue this reconciliation at both the technical and the methodological level. The programme "Mathematical Economics and Econometrics", at both undergraduate and graduate levels, was jointly taught by the economics and statistics departments, with several key courses taught by the mathematics department. Indeed, the lines between the two departments were so blurred that as an undergraduate I didn’t realize that Ken Wallis and Grayham Mizon, two well-known econometricians, were faculty members of the statistics department. Peter Robinson, who succeeded Denis Sargan in the Tooke Chair, did his M.Sc. in the statistics department.

The main textbooks in statistics during my undergraduate studies were Kendall and Stuart (1969, 1973, 1968), Cox and Hinkley (1974) and Rao (1973). For Durbin’s graduate course on ‘Time Series’ the main textbooks were Hannan (1970) and Anderson (1971), with occasional references to Box and Jenkins (1970). For Sargan’s ‘Advanced Econometric Theory’ course the main statistics textbook was Cramer (1946). My initial thought that the book was out of date turned out to be completely unfounded; an early lesson in appreciating Sargan’s wisdom on statistical issues.

An issue related to the high-level grounding in statistical theory aimed at by the LSE courses pertains to Sargan as a teacher. According to Hendry (Ericsson, 2004, 748-749):

Denis was always charming and patient, but he never understood the knowledge gap between himself and his students. He answered questions about five levels above the target, and he knew the material so well that he rarely used lecture notes. I once saw him in the coffee bar scribbling down a few notes on the back of an envelope—they constituted his entire lecture. Also, while the material was brilliant, the notation changed several times in the course of the lecture: $a$ became $b$, then $c$, and back to $a$, while $c$ had become $a$ and then $b$; and two further symbols got swapped as well.

In light of that, how did Sargan advise so many distinguished econometricians (Maasoumi, 1988a)? Hendry went on to answer that question: “Sorting out one’s notes proved invaluable, however, and eventually ensured comprehension of Denis’s lectures.” (Ericsson, 2004, 749)

The selected group of 12-15 M.Sc. students attending that course viewed the deciphering of Sargan’s lecture notes as a personal challenge, and they had (or could acquire from other LSE courses) the technical background needed to do just that. It is no coincidence that Hendry was able to publish several technical papers within half a dozen years of arriving at the LSE in 1967 without any background in statistical theory; see Ericsson (2004).

3 Textbook Econometrics

The key difference between the mainstream post-Cowles and the LSE perspectives stems primarily from their view of the role of theory and data in empirical modeling.

3.1 Pre-Eminence of Economic Theory

The ‘pre-eminence of theory’ perspective, dominating economic modeling since Ricardo (1817), attributes to data the subordinate role of ‘quantifying theories presumed true’. In this conception, data do not so much test as facilitate the instantiation of theories. Econometric methods offer sophisticated ways ‘to bring data into line’ with a particular theory. Since the theory has little chance to be falsified, such instantiations do not constitute genuine tests of the theory as such; see Spanos (2010a).

Cairnes (1888, 72-94) articulated the most extreme version of the ‘pre-eminence of theory’ by pronouncing data irrelevant for appraising economic theories. His argument, in a nutshell, was that economic theories are far superior to those of physics because their premises are deductive in nature. They are derived from ‘self-evident truths’ established by introspection via direct access to the ultimate causes of economic phenomena, rendering them infallible. In contrast, the premises of Newtonian Mechanics are mere inductive generalizations, based on experimentation and inductive inference, which are known to be fallible.

Robbins, a leading professor at the LSE during the period 1930-1965 (see Sargan, 2003), articulated an almost identical view:

In Economics, ..., the ultimate constituents of our fundamental generalizations are known to us by immediate acquaintance. In the natural sciences they are known only inferentially. There is much less reason to doubt the counterpart in reality of the assumption of individual preferences than that of the assumption of the electron. (Robbins, 1935, 105)

Indeed, Robbins (1935) dismissed the application of statistics to theory appraisal in economics, claiming that such techniques are only applicable to data which can be considered ‘random samples’. Since there were no such data in economics, statistical analysis of economic data had no role to play in theory assessment. Robbins (1971, 149) later recanted these claims, describing them as: “exaggerated reactions to the claims of institutionalists and ‘crude’ econometricians like Beveridge”.

The current version of this perspective sounds almost as extreme:

Unlike the system-of-equations approach, the model economy which better fits the data is not the one used. Rather currently established theory dictates which one is used. (Kydland and Prescott, 1991, 174)

3.2 The Framing of Textbook Econometrics

The prevailing view among applied economists in the early 1930s was that the statistical methods associated with Fisher-Neyman-Pearson, although applicable to experimental data, are inapplicable to economic data because: (i) the ‘experimental method’ is inappropriate for studying economic phenomena, (ii) there is always an unlimited number of potential factors influencing economic phenomena (hence the invocation of ceteris paribus clauses), (iii) economic phenomena are intrinsically heterogeneous (spatial and temporal variability), and (iv) economic data are vitiated by errors of measurement; see Frisch (1934). Hence, Frisch rejected the Fisher-Neyman-Pearson approach to statistical inference and proposed his confluence analysis as an alternative method that could account for the perceived features (i)-(iv).

This standpoint led to a different approach to statistical modeling and inference, one based on the Gauss-Laplace curve-fitting perspective in conjunction with Quetelet’s scheme of:

(C1) a systematic (deterministic) component (constant causes) determined by substantive information, and

(C2) a random part which represents the non-systematic error (accidental causes) component (see Desrosières, 1998).

Econometric modeling, with theory-oriented structural models providing the premises for statistical inference, was initiated in the 1940s and formalized by the Cowles Commission (see Koopmans, 1950, Hood and Koopmans, 1953) into the Simultaneous Equations Model (SEM); see Morgan (1990), Qin (1993). Modern econometrics, however, was initially framed in the early 1960s by two highly influential textbooks, Johnston (1963) and Goldberger (1964). They successfully demarcated the intended scope of modern econometrics for the next half century and beyond. Their success is largely due to two crucial factors.

The first was their embrace of the Pre-Eminence of Theory perspective on empirical modeling. This perspective had great appeal to economic theorists because it gave econometrics an instrumental role with very narrow scope. They achieved this by adopting the curve-fitting perspective, where a deterministic theory-model is transformed into a statistical model by attaching white-noise error term(s) that often represent errors of measurement, errors of approximation, omitted effects or stochastic shocks; see Marschak (1953, 12) and Johnston (1963, 5-7).

Pagan (1984, 103) offered a succinct description of the textbook approach as follows:

Four steps almost completely describe it: a model is postulated, data gathered, a regression run, some t-statistics or simulation performance provided and another ‘empirical regularity’ was forged.

Although this reads like a caricature, it is very similar to the description offered by Johnston (1972, 6).

The second reason for the success of textbook econometrics was its demystifying of the Cowles SEM by presenting a system of interdependent equations as a natural extension of the Linear Regression model. The blueprint of this textbook econometrics tradition was simple and coherent. The Linear Regression model:

$$y_t=\beta_0+\beta_1 x_t+u_t,\quad t\in\mathbb{N},\qquad (1)$$

in conjunction with the Gauss-Markov theorem, became the cornerstone of econometric theory. The latter provided the key assumptions for the error term:

$$[\text{i}]\ E(u_t)=0,\qquad [\text{ii}]\ E(u_tu_s)=0,\ t\neq s,\qquad [\text{iii}]\ E(u_t^{2})=\sigma^{2},\qquad [\text{iv}]\ E(u_tx_t)=0,\qquad t,s\in\mathbb{N},$$

that would yield Best Linear Unbiased Estimators (BLUE) of $(\beta_0,\beta_1)$. Moreover, all other models of interest in econometrics could then be viewed as variations/extensions of this basic recipe. The rest of this textbook econometrics blueprint is a sequence of chapters that discuss inference methods relating to departures from the above assumptions [i]-[iv]. These chapters are given titles indicating that departures from assumptions [i]-[iv] are viewed as ‘problems’ to be fixed.
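
As a minimal illustration of this recipe (the simulated data and numerical values are mine, not from the textbooks cited), the OLS/BLUE estimates of $(\beta_0,\beta_1)$ are simply the least-squares solution computed under assumptions [i]-[iv]:

```python
# A minimal sketch of the textbook recipe: simulate data satisfying the error
# assumptions [i]-[iv] and compute the OLS (BLUE) estimates of (beta0, beta1).
import numpy as np

rng = np.random.default_rng(1)
n = 100
x = rng.normal(size=n)
u = rng.normal(size=n)               # zero-mean, homoskedastic, uncorrelated errors
y = 1.0 + 2.0 * x + u                # y_t = beta0 + beta1*x_t + u_t

X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # (X'X)^{-1} X'y
print(np.round(beta_hat, 2))                   # close to [1.0, 2.0]
```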

The same blueprint, but with unremittingly accumulating additional material, has dominated all the traditional textbooks in econometrics to this day; see Johnston (1972), Theil (1971), Maddala (1977), Judge et al. (1985), Greene (2011) inter alia.

3.3 Textbook Econometrics and the DAE

Part of the success of this textbook blueprint was due to the fact that the above simple Linear Regression model could be extended to the case of $k$ regressors by using a carefully designed matrix notation that made the extension seem intuitive and straightforward. That notation was provided by the Department of Applied Economics (DAE) at Cambridge, England, under the directorship of Richard Stone; see Gilbert (1991). In 1948 Stone offered a job to Durbin upon completion of his diploma in mathematical statistics. The papers by Durbin and Watson (1950, 1951) influenced the framing of textbook econometrics in several crucial respects.

The first major influence was the pertinent matrix notation for the Linear Regression model:

$$\mathbf{y}=\mathbf{X}\boldsymbol{\beta}+\mathbf{u},\qquad (\mathbf{u}\mid\mathbf{X})\sim\mathsf{N}(\mathbf{0},\sigma^{2}\mathbf{I}_n).$$

The conditioning on $\mathbf{X}$ was added to render the original ‘fixed in repeated samples’ assumption more general; see Goldberger (1964). The role of notation in making certain procedures seem intuitive is often undervalued in science, but it was critical for the success of textbook econometrics. A strong case can be made that, despite the fact that Malinvaud (1966) was a more esteemed textbook at both the technical and conceptual levels, its overall influence on econometrics was considerably less than that of Johnston (1963) and Goldberger (1964).

The second key influence of the Durbin-Watson papers on textbook econometrics was to provide the initial articulation of the Gauss-Markov theorem (1950, 410):

If in addition, $\varepsilon_1,\varepsilon_2,\ldots,\varepsilon_n$ can be taken to be distributed independently of each other with constant variance, then by Markov’s theorem the least squares estimates of $\beta_1,\beta_2,\ldots,\beta_k$ are best linear unbiased estimates whatever the form of the distribution of the $\varepsilon$’s.

The third key influence was the way Durbin and Watson (1950) framed the textbook econometrics perspective on M-S testing, using the non-correlation assumption [ii]. Their contribution can be described in the following two steps.

Step 1. They postulated an Autocorrelation-Corrected Regression:

$$y_t=\beta_0+\beta_1 x_t+u_t,\qquad u_t=\rho u_{t-1}+\varepsilon_t,\quad \varepsilon_t\sim\mathsf{NIID}(0,\sigma_{\varepsilon}^{2}),\quad t\in\mathbb{N},\qquad (2)$$

that parametrically nests the original Linear Regression model (1). This involved particularizing the generic departure from independence:

$$E(u_tu_s)=\sigma_{ts}\neq 0,\quad t\neq s,\ \ t,s=1,\ldots,n,\qquad (3)$$

in the form of the AR(1) model, i.e. (3) has been particularized to:

$$[\text{ii}]^{*}\ \ u_t=\rho u_{t-1}+\varepsilon_t,\quad \varepsilon_t\sim\mathsf{NIID}(0,\sigma_{\varepsilon}^{2}),\quad |\rho|<1.\qquad (4)$$

Note that this particularization has reduced the unknown parameters of (3) from $n(n-1)/2$, a number increasing with $n$, to just one, $\rho$.

Step 2. Testing for independence is now parameterized in terms of (2) using the hypotheses:

$$H_0:\ \rho=0\quad \text{vs.}\quad H_1:\ \rho\neq 0.\qquad (5)$$

In terms of the OLS residuals $\widehat{u}_t=y_t-\widehat{\beta}_0-\widehat{\beta}_1x_t$, where $\widehat{\boldsymbol{\beta}}:=(\mathbf{X}^{\top}\mathbf{X})^{-1}\mathbf{X}^{\top}\mathbf{y}$ is the OLS estimator, the D-W test for (5) is defined by:

$$d(\mathbf{z}_0)=\frac{\sum_{t=2}^{n}(\widehat{u}_t-\widehat{u}_{t-1})^{2}}{\sum_{t=1}^{n}\widehat{u}_t^{2}}.\qquad (6)$$

When the observed test statistic $d(\mathbf{z}_0)$ is smaller (bigger) than the lower (upper) bound $d_L$ ($d_U$), $H_0$ is rejected (accepted).
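
The computation itself is elementary, as the following sketch shows (illustrative simulated data; the bounds $d_L$ and $d_U$ would be read from the Durbin-Watson tables, which are not reproduced here):

```python
# Sketch: the Durbin-Watson statistic from OLS residuals of y_t = b0 + b1*x_t + u_t.
# Data are simulated with AR(1) errors so the statistic falls well below 2.
import numpy as np

rng = np.random.default_rng(2)
n = 80
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.7 * u[t - 1] + rng.normal()     # positively autocorrelated errors
y = 1.0 + 2.0 * x + u

X = np.column_stack([np.ones(n), x])
b = np.linalg.lstsq(X, y, rcond=None)[0]
res = y - X @ b                              # OLS residuals

dw = np.sum(np.diff(res) ** 2) / np.sum(res ** 2)
print(round(dw, 2))                          # roughly 2*(1 - rho_hat), here well below 2
```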

What is especially remarkable, and worth bringing out, is that Durbin and Watson (1950, 409) did not recommend a respecification strategy, declaring that:

“We shall not be concerned in either paper with the question of what should be done if the test gives an unfavorable result.”

A third influential paper written by the DAE group, Cochrane and Orcutt (1949), provided the answer to this respecification question for textbook econometrics.

Step 3. When the D-W test rejects $H_0$, adopt $H_1$. That is, replace the original model (1) with the alternative model (2). This respecification is traditionally presented as replacing the OLS estimator $\widehat{\boldsymbol{\beta}}$, which is inefficient under [ii]*, with the relatively more efficient GLS estimator $\widehat{\boldsymbol{\beta}}_{GLS}=(\mathbf{X}^{\top}\mathbf{\Omega}^{-1}\mathbf{X})^{-1}\mathbf{X}^{\top}\mathbf{\Omega}^{-1}\mathbf{y}$ (Greene, 2011). The justification stems from its affinity to the Pre-Eminence of Theory perspective because it retains the original theory-model and ‘fixes’ assumption [ii] of the error term.
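
A sketch of this ‘error-fixing’ step, using a generic Cochrane-Orcutt-type feasible GLS rather than the original 1949 implementation, is given below:

```python
# Sketch of the textbook respecification: estimate rho from OLS residuals and
# re-estimate the regression on quasi-differenced data (Cochrane-Orcutt type FGLS).
import numpy as np

def cochrane_orcutt(y, X, n_iter=10):
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    rho = 0.0
    for _ in range(n_iter):
        res = y - X @ b
        rho = np.sum(res[1:] * res[:-1]) / np.sum(res[:-1] ** 2)   # AR(1) estimate
        y_q = y[1:] - rho * y[:-1]                                 # quasi-differences
        X_q = X[1:] - rho * X[:-1]
        b = np.linalg.lstsq(X_q, y_q, rcond=None)[0]
    return b, rho

rng = np.random.default_rng(3)
n = 200
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.6 * u[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + u
X = np.column_stack([np.ones(n), x])
print(cochrane_orcutt(y, X))        # approximately ([1.0, 2.0], 0.6)
```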

This form of ‘error-fixing’, i.e. adopting the particular alternative in an M-S test, has been extended to other assumptions, including homoskedasticity and linearity; see Greene (2011). In section 5 it is argued that this respecification strategy is fallacious and invariably leads to unreliable inferences.

4 The Framing of the LSE Tradition

The key differences between the LSE and textbook traditions were primarily methodological. The protagonists were sceptical about the pertinence of the Pre-Eminence of Theory perspective because they knew first-hand that ‘quantifying theoretical models presumed true’ doesn’t work in practice; see Mizon (1995a). Sargan (1957) criticized the simplistic way of bridging the gap between theory and data, encouraged paying particular attention to the nature of the information in the data (cross-section vs. time series), and warned against treating the choice of the data as an afterthought.

In their attempt to avoid both extreme practices, the Pre-Eminence of Theory modeling perspective on the one hand and data-driven ARIMA modeling on the other, the LSE tradition set out a ‘third way’, aspiring to account for the regularities in data without ignoring pertinent theory information. As argued by Hendry (2009, 56-57):

This implication is not a tract for mindless modeling of data in the absence of economic analysis, but instead suggests formulating more general initial models that embed the available economic theory as a special case, consistent with our knowledge of the institutional framework, historical record, and the data properties. ... Applied econometrics cannot be conducted without an economic theoretical framework to guide its endeavours and help interpret its findings. Nevertheless, since economic theory is not complete, correct, and immutable, and never will be, one also cannot justify an insistence on deriving empirical models from theory alone.

The LSE econometricians found themselves recasting econometric modeling by inventing new concepts and methods, while striving to find or adapt a suitable foundation in one or another philosophy of science (Kuhn, Popper, Lakatos); see Hendry (1980), Hendry and Richard (1982). Although they drew loosely on a Popperian injunction to criticize and a Lakatosian demand for ‘progressiveness’, these philosophical approaches did not provide an appropriate framework wherein one could repudiate the misleading charges leveled against the LSE tradition; see Spanos (2010a).

4.1 Textbook Econometrics at the LSE

The LSE courses in econometrics during my full-time student days [1973-1979] were based on traditional textbooks: Johnston (1963/1972), Malinvaud (1966/1970) and Theil (1971) for undergraduate courses, and Schmidt (1976) and Hood and Koopmans (1953) for Sargan’s graduate courses. Although the material taught in econometric courses was largely traditional (Sargan, 1988b), it had several distinct differences in emphasis. The first difference was the broader and more balanced grounding in statistical theory, including estimation, testing and prediction, well beyond the definitions and summaries found in the recommended textbooks. The emphasis on Maximum Likelihood Estimation (MLE), likelihood ratio and related frequentist procedures associated with Fisher, Neyman and Pearson was engendered by the synergy and close collaboration between the economics and statistics departments. In 1963 Durbin wrote a paper entitled "Maximum Likelihood Estimation of the Parameters of a System of Simultaneous Regression Equations", which provided the motivation for Hendry (1976). The second difference was a special emphasis placed on certain modeling issues arising from time series data, such as modeling temporal dependence/heterogeneity. The third difference in emphasis was the presentation of empirical modeling as an iterative process instead of a one-shot model-fitting routine. One could also discern a certain critical perspective on textbook econometrics, one that encouraged students to resist excessive respect for the authority of the textbook and to develop a more critical attitude.

These crucial differences in the teaching of econometrics made the LSE students both aware of the path-breaking nature of the research agenda of the LSE econometricians (Leamer, Hendry and Poirier, 1990; Pagan, 1987), and confident enough in their technical background to pursue such topics in their research. The majority of the Ph.D. students, following Sargan’s lead (Maasoumi, 1988b), pursued mainly technical issues arising in both time series and simultaneous equations modeling. In this sense, the LSE tradition participated fully in the development of technical tools for addressing crucial inference problems on the mainstream post-Cowles agenda. Indeed, there had been crucial interactions between the LSE protagonists and the North American post-Cowles tradition, beginning with Sargan’s extended visits to the United States in the late 1950s (Phillips, 1985, 125):

“It certainly was very stimulating to have not only long stays at Minnesota [1958-9] and Chicago [1959-60], but also to spend some time on the West Coast in the summer of 1959 and visit the East Coast, particularly the Cowles Foundation in 1960.”

A smaller number of students, including myself, decided to grapple with the methodological issues raised by the different perspectives. This is not unrelated to the fact that Hendry interacted with various groups of econometricians at CORE, San Diego, Yale, Berkeley and the Australian National University during the period 1980-81, and had to defend the LSE methodology; see Ericsson (2004, 767).

4.2 Key Elements of the LSE tradition

According to Hendry (2003), the methodological issues raised in Sargan (1964) largely defined the research agenda for the LSE tradition for the next 20 years or so. In a paper entitled “J. Denis Sargan and the Origins of LSE Econometric Methodology”, he summarizes the key contributions of Sargan (1964):

In this paper, Denis laid out the conceptual foundations of what has become the "LSE approach." The essential elements that he formalized included:
   (1)    the use of "long-run" economic analysis to specify the equilibrium of the model;
   (2)    the introduction of "equilibrium-correction" mechanisms into behavioral dynamic econometric models;
   (3)    the development of a new interpretation of autoregressive errors in time-series models;
   (4)    the construction of valid misspecification tests after estimating dynamic models;
   (5)    the use of model comparison procedures for linear against logarithmic specifications;
   (6)    the investigation of the impact of data transforms on the selection of models;
   (7)    a nonlinear in parameters instrumental variables estimator for measurement errors;
   (8)    the development of operational computer programs to implement the new econometric methods;
   (9)    a proof that his iterative computations would converge with near certainty; and
   (10)    matching the econometric theory to the substantive empirical modeling problem.

In elements (1)-(2) Sargan proposed innovative ways to bridge the gap between dynamic statistical models and static structural models in terms of the long-run equilibrium and the error-correction term due to Phillips (1957). In his reply to Ball’s criticism that his wage equation does not accord well with the theoretical demand and supply functions for labor, Sargan (1964, 60) argued:

it is usual to think of the type of wage equation that I have been estimating as a price-adjustment equation, and also that a more complete model of this type would treat unemployment as an endogenous variable. To do this would require an equation explaining the actual number employed, and the actual number retaining their names on the employment exchange registers. But can these be considered the same as the demand and supply of labour? In the body of the paper I give reasons for doubting this.

The ‘error-correction’ formulation had its roots in Phillips (1957), but it was popularized by Davidson et al. (1978) and used widely in empirical modeling because of its success in improving a model’s forecasting ability. It also proved instrumental in initiating the extensive literature on ‘cointegration’; see Granger (1981), Hendry (1986), Engle and Granger (1987), Johansen (1991; 1995).

The initial seeds of the notion of cointegration were sown in a discussion between Hendry and Granger in 1980, after a seminar given by Hendry at the monthly meeting of the SSRC Econometrics Workshop.

Granger called into question the ‘validity’ of the basic error-correction model:

$$\Delta y_t=\gamma_0+\gamma_1\Delta x_t+\gamma_2(y_{t-1}-x_{t-1})+\varepsilon_t,\quad t\in\mathbb{N},\qquad (7)$$

on the grounds that it is ‘unbalanced’; see Granger (1990, 12). In the terminology of cointegration developed later, Granger was arguing that if the time series $\{y_t,x_t\}$ were integrated of order 1, denoted by $y_t\sim I(1)$ and $x_t\sim I(1)$, then $\Delta y_t\sim I(0)$ and $\Delta x_t\sim I(0)$, but the error-correction term $(y_{t-1}-x_{t-1})\sim I(1)$. After further discussion at the end of the seminar, which I witnessed, they agreed to disagree on the cogency of dynamic specifications like (7), and I was asked by Hendry to run some simulations to see if they shed any light on the disagreement. The initial simulations that evening seemed to support Granger’s doubts because the recursive estimator of the coefficient $\gamma_2$ did not seem constant, and I relayed that information to Hendry the next day. It turned out that under different conditions they were both right.
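
The point at issue can be conveyed by a small simulation (my own illustrative sketch, not the simulations run in 1980): when $y_t$ and $x_t$ are independent random walks the term $(y_{t-1}-x_{t-1})$ is itself I(1), whereas for a cointegrated pair it is stationary and (7) is balanced.

```python
# Sketch: the 'gap' y - x wanders for independent random walks (its variance keeps
# growing with the sample) but is stationary when y and x are cointegrated.
import numpy as np

rng = np.random.default_rng(4)
n = 500

x_ind = np.cumsum(rng.normal(size=n))          # two independent I(1) series
y_ind = np.cumsum(rng.normal(size=n))

x_coi = np.cumsum(rng.normal(size=n))          # a cointegrated pair:
y_coi = x_coi + rng.normal(scale=0.5, size=n)  # y = x + stationary error

for label, gap in [("independent", y_ind - x_ind), ("cointegrated", y_coi - x_coi)]:
    variances = [round(float(np.var(gap[:m])), 2) for m in (100, 250, 500)]
    print(label, variances)   # growing variances vs. roughly constant ~0.25
```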

In element (3) Sargan proposed to view dynamic models in terms of the observable process $\{\mathbf{Z}_t,\ t\in\mathbb{N}\}$ underlying the data $\mathbf{z}_0:=(\mathbf{z}_1,\ldots,\mathbf{z}_n)$, where $\mathbf{Z}_t:=(y_t,x_t)$, treating the Linear Regression model with an AR(1) error (see (2)) as a restricted form of a Dynamic Linear Regression model:

$$y_t=\alpha_0+\alpha_1 x_t+\alpha_2 x_{t-1}+\alpha_3 y_{t-1}+\varepsilon_t,\quad t\in\mathbb{N},\qquad (8)$$

with the restrictions taking the form of the (non-linear) common factors:

$$\alpha_2+\alpha_1\alpha_3=0.\qquad (9)$$

These restrictions stem from the fact that:

$$y_t=\beta_0(1-\rho)+\beta_1 x_t-\rho\beta_1 x_{t-1}+\rho y_{t-1}+\varepsilon_t,\quad t\in\mathbb{N}.$$
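
A quick numerical check of this implication (an illustrative simulation with assumed values $\beta_1=2$ and $\rho=0.6$): data generated from (2) yield unrestricted estimates of (8) that approximately satisfy $\alpha_2=-\alpha_1\alpha_3$, i.e. the common factor restriction (9).

```python
# Sketch: simulate y_t = b0 + b1*x_t + u_t with AR(1) errors, estimate the
# unrestricted dynamic regression (8), and check the common factor restriction (9).
import numpy as np

rng = np.random.default_rng(5)
n = 5000
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.6 * u[t - 1] + rng.normal()       # rho = 0.6
y = 1.0 + 2.0 * x + u                          # b0 = 1, b1 = 2

# regress y_t on (1, x_t, x_{t-1}, y_{t-1})
Z = np.column_stack([np.ones(n - 1), x[1:], x[:-1], y[:-1]])
_, a1, a2, a3 = np.linalg.lstsq(Z, y[1:], rcond=None)[0]
print(round(a2, 2), round(-a1 * a3, 2))        # both close to -rho*b1 = -1.2
```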

This departure from the textbook viewpoint was very important for several reasons.

(a) It placed the observable process $\{\mathbf{Z}_t,\ t\in\mathbb{N}\}$ and its probabilistic structure at center stage, and unveiled the distributional reduction yielding the parameterization implicit in different statistical models. The reduction for (8) is: $D(\mathbf{Z}_1,\ldots,\mathbf{Z}_n;\boldsymbol{\varphi})=D(\mathbf{Z}_1;\boldsymbol{\psi}_1)\prod_{t=2}^{n}D(\mathbf{Z}_t\mid\mathbf{Z}_{t-1};\boldsymbol{\psi})$, with $D(\mathbf{Z}_t\mid\mathbf{Z}_{t-1};\boldsymbol{\psi})$ the distribution underlying (8) and $\mathbf{Z}_t:=(y_t,x_t)$. The reduction is primarily due to Hendry’s collaborative work with Richard (Richard, 1980), and provided the key to elucidating the notion of weak exogeneity; see Hendry and Richard (1982), Engle, Hendry and Richard (1983).

(b) It brought out the restrictive nature of (2) when compared with statistical models like (8), specified directly in terms of the observable process $\{\mathbf{Z}_t,\ t\in\mathbb{N}\}$. The implied restrictions, although testable, are rarely data-acceptable. As shown in McGuirk and Spanos (2008), the common factor restrictions in (9) impose highly unappetizing restrictions on the temporal structure of the vector process $\{\mathbf{Z}_t,\ t\in\mathbb{N}\}$ that involve several Granger non-causality presumptions! More generally, it showed that modeling the observable processes directly is always more general than modeling them indirectly via the error term.

(c) It highlighted an important difference in attitude toward departures from model assumptions between the two traditions. For the textbook econometrics tradition such departures are viewed as a problem and a nuisance to be ‘corrected’. In contrast, for the LSE tradition such departures are not a nuisance but a blessing, since the modeler can use the additional statistical information to improve both the reliability and precision of inference; see Hendry and Mizon (1978).

(d) It brought out the importance of keeping track of the relevant error probabilities in sequential testing, via the general-to-specific procedure whose rationale was first introduced by Anderson (1962). This was in contrast to the textbook econometrics tradition, which favored simple-to-general modeling procedures.

(e) It revealed the questionable nature of the textbook strategy of adopting the alternative model when the D-W test rejects the null (step 3 above), and offered more general ways to account for the presence of temporal dependence, e.g. respecifying the original Linear Regression into the Dynamic Linear Regression model. It is interesting to note that in the case of Linear Regression with an AR(1) error term (see (2)), Durbin (1960) argued in favor of ignoring the common factor restrictions and estimating the parameters of the Dynamic Linear Regression model using OLS.

Element (4), relating to M-S testing, which was initiated in Sargan’s early writings and enhanced by Durbin’s contributions in this area, represents another crucial departure from textbook econometrics. As argued by Hendry (1980, 406):

The three golden rules of econometrics are test, test and test: that all three rules are broken regularly in empirical applications is fortunately easily remedied. Rigorously tested models, which adequately describe the available data, encompass previous findings and were derived from well-based theories would greatly enhance any claim to be scientific.

A strategy for M-S testing was initially formalized by Mizon (1977) and applied more broadly by other members of the LSE tradition, especially Hendry and his coauthors; see Davidson et al. (1978), Hendry (1980). This also encouraged practitioners to use graphical techniques that bring out the chance regularities in the data, with a view to render statistical model specification more effective; see Spanos (1999). The LSE perspective favored a thorough probing of the model assumptions, so as to account for all the statistical information in the data, and respecification whenever the model is found to be misspecified.
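
The spirit of such a battery can be conveyed by a toy set of residual diagnostics (deliberately simplified, informal checks; these are not the particular tests proposed by Durbin, Sargan or Mizon):

```python
# Sketch of an informal mis-specification battery on OLS residuals: crude signals
# of non-normality, residual autocorrelation and heteroskedasticity.
import numpy as np

def ms_checks(res, x):
    r = res - res.mean()
    skew = np.mean(r**3) / np.mean(r**2) ** 1.5
    kurt = np.mean(r**4) / np.mean(r**2) ** 2
    rho1 = np.sum(r[1:] * r[:-1]) / np.sum(r**2)        # 1st-order autocorrelation
    het = np.corrcoef(r**2, x**2)[0, 1]                 # squared residuals vs. x^2
    return {"skewness": skew, "excess_kurtosis": kurt - 3.0,
            "resid_autocorr": rho1, "het_signal": het}

# usage with simulated, correctly specified data (all signals should be near zero)
rng = np.random.default_rng(7)
x = rng.normal(size=200)
y = 1.0 + 2.0 * x + rng.normal(size=200)
X = np.column_stack([np.ones(200), x])
res = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
print({k: round(v, 2) for k, v in ms_checks(res, x).items()})
```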

Elements (5)-(6) pertain to model validation and model selection procedures that played an important role in the modeling practices of the LSE tradition; see Sargan (1973). Choosing between a linear and a log-linear specification arose naturally in the context of modeling with time series data, and since the two specifications are not parametrically nested, one needed alternatives to Neyman-Pearson testing to choose between them. Ultimately, however, the issue of choosing between the two specifications is one of statistical adequacy (the model assumptions are valid for the data), and the linear and log-linear specifications differ in more ways than just the functional form of the regression function; see Spanos, Hendry and Reade (2008). Hence, Sargan proposed to specify one’s statistical model in a way that ensures that the error term is approximately white noise. This is in contrast to the textbook perspective, which encourages the practitioner to retain the original theory-model and change the probabilistic assumptions of the error term.

The LSE tradition’s answer to the problem of choosing among (parametrically) non-nested models was the encompassing principle and the associated procedures; see Mizon (1984), Mizon and Richard (1986), Hendry and Richard (1989).

In light of the fact that the LSE tradition encouraged the specification of statistical models with lags and trends, even when the structural model was static, the need for a systematic way to test downwards from a general to more specific models arose naturally. Anderson (1962, 1971) provided the answer by showing how one can begin with a general specification and test sequentially downwards using Neyman-Pearson testing while keeping track of the error probabilities. Mizon (1977) extended these results to non-ordered hypotheses and non-linear restrictions. This led to the General-to-Specific procedure, which grew into a more distinct methodology associated with David Hendry and his coauthors because of its key role in guiding model validation and selection; see Hendry (2000, ch. 19), Campos et al. (2005).
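
A stylized sketch of the basic idea, using a simple sequential lag-length search rather than Autometrics or Mizon’s actual procedure: start from a general autoregression and test downwards, dropping the longest lag while its t-ratio is insignificant at a fixed level.

```python
# Sketch of general-to-specific lag selection: begin with p_max lags and simplify
# sequentially, keeping the common estimation sample across candidate models.
import numpy as np

def t_ratios(y, X):
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ b
    s2 = e @ e / (len(y) - X.shape[1])
    se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
    return b / se

def gets_lags(y, p_max=6, crit=1.96):
    n = len(y) - p_max
    p = p_max
    while p > 0:
        X = np.column_stack([np.ones(n)] +
                            [y[p_max - j: p_max - j + n] for j in range(1, p + 1)])
        t = t_ratios(y[p_max:], X)
        if abs(t[-1]) < crit:      # longest lag insignificant: drop it and re-test
            p -= 1
        else:
            break
    return p

rng = np.random.default_rng(6)
y = np.zeros(400)
for t in range(2, 400):
    y[t] = 0.5 * y[t - 1] + 0.3 * y[t - 2] + rng.normal()
print(gets_lags(y))                # typically selects 2
```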

Element (7) is one example among the several crucial inferential methods and procedures put forward by the LSE tradition, which were often motivated by its members’ experience in empirical modeling with time series data.

Sargan (1958, 1959) greatly generalized the Instrumental Variables (IV) method in the context of the SEM, allowing for dynamic specifications and non-linearities. Of particular interest are several papers on estimation, identification and testing in the context of the SEM, with special emphasis on the finite sample properties of structural parameter estimators (IV, 2SLS, 3SLS, FIML) and tests, as well as dynamic specifications, using Edgeworth and Gram-Charlier approximations with a view to improving on the asymptotic approximations to sampling distributions. Monte Carlo simulations were also used extensively to study the sampling distributions of such estimators and tests; see Maasoumi (1988b), Phillips (1985). This was clearly motivated by the practical problem of undue reliance on asymptotic theory even in cases of small sample sizes; Sargan (1964) relied on n=16. As Sargan explains (Phillips, 1985, 126): “I had been worried for some time that all our theory except for linear models was asymptotic theory, and I realized that the Edgeworth expansion was a way forward.”
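
For reference, the generalized IV (2SLS-type) estimator at the heart of this work has the compact form $\widehat{\boldsymbol{\beta}}_{IV}=(\mathbf{X}^{\top}\mathbf{P}_Z\mathbf{X})^{-1}\mathbf{X}^{\top}\mathbf{P}_Z\mathbf{y}$ with $\mathbf{P}_Z=\mathbf{Z}(\mathbf{Z}^{\top}\mathbf{Z})^{-1}\mathbf{Z}^{\top}$; a generic sketch (not Sargan’s original code) is:

```python
# Sketch: generalized instrumental variables estimator for the case of more
# instruments (columns of Z) than included regressors (columns of X).
import numpy as np

def generalized_iv(y, X, Z):
    PzX = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)    # projection of X onto span(Z)
    return np.linalg.solve(X.T @ PzX, PzX.T @ y)   # (X'PzX)^{-1} X'Pz y
```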

Elements (8)-(9) represent another component of the LSE tradition, one that helped make many of the innovative procedures proposed by its members available to practitioners. In the early 1960s Sargan wrote the code for RALS for his 1964 paper, and Hendry continued that tradition with GIVE (Generalized Instrumental Variables Estimation) and PcGive; the latter has been continuously updated and is widely used to this day. Hence, from the mid-1960s onwards the writing of computer programs to implement estimation and testing procedures was an important feature of applied research in econometrics at the LSE, and it has continued unabated to this day, particularly in Oxford. This has helped to broaden the appeal of the LSE tradition because practitioners could use the software to implement its innovative procedures.

David Hendry played a crucial role in enhancing and developing further the themes and methodological issues initiated by Sargan (1964). He popularized and enhanced the modeling procedures and strategies in the form of General-to-Specific modeling, which can be implemented using PcGive; see Gilbert (1986, 1989), the papers in Hendry (2000, especially ch. 19-20), Hendry (1987, 1995, 2009), Mizon (1995) and the papers in Campos, Ericsson and Hendry (2005). More recently, a related software program known as ‘Autometrics’ has been designed to implement the LSE modeling methodology, including model selection and data mining issues, in an automated and more systematic way; see Doornik (2009), Hendry and Mizon (2011), Castle et al. (2012).

Element (10), on ‘matching the econometric theory to the substantive empirical modeling problem’, represents the key feature of the LSE tradition. It pertains to how its empirical work, beginning with Sargan’s 1964 wage-price model, strove to bridge the gap between economic theory and data by accounting for the regularities in the data without ignoring pertinent theory information.

4.3 The LSE Tradition was Never Taught at the LSE

From a teaching perspective, the LSE tradition at its place of birth remained largely an ‘oral tradition’. This is primarily because its first ‘official’ textbook, Hendry (1995), had an extended gestation period and appeared long after Hendry had left the LSE for Oxford University in 1982.

The LSE tradition was primarily reflected in the research of the LSE econometrics group and their seminar series, such as the weekly SSRC-funded workshop on “Specification and Estimation Problems with Dynamic Econometric Models” (1974-6) and the monthly meetings of the SSRC Econometrics Workshop. The first attempt to demarcate this tradition by contrasting it to the textbook approach was made in Hendry and Wallis (1984), a volume dedicated to Sargan, with contributions by Steve Nickell, Andrew Harvey, Jean-François Richard, Adrian Pagan, Grayham Mizon, Pravin Trivedi, David Hendry and Meghnad Desai, alongside a reprint of Sargan (1964).

Sargan (1988), based on his recorded lectures on the Advanced Econometric Theory course during the academic year 1983-4, represents a rigorous presentation of traditional textbook methods with special emphasis on asymptotic theory, the SEM, OLS, GLS and Instrumental Variables, maximum likelihood methods and alternative testing procedures. In this book the LSE tradition is reflected in occasional comments and its oblique focus on modeling the dynamics, but little else. Indeed, Sargan always saw himself as working within the post-Cowles tradition. His alternative perspective concerned only the practical aspects of relating theory to data, in general, and the empirical aspects of modeling with time series data, in particular. Peter Robinson, who succeeded Sargan in the Tooke Chair in 1984, had no interest in the methodological issues raised by the LSE tradition, but continued Sargan’s predilection for rigorous mathematical arguments in the presentation of textbook econometrics.

5 Retrospective and Perspective

The process of blending the above methodological insights from Haavelmo (section 2.1) into the LSE tradition in econometrics, in order to address certain key methodological problems, began by focusing on modeling the observable process $\{\mathbf{Z}_t,\ t\in\mathbb{N}\}$ underlying the data $\mathbf{z}_0$, instead of making probabilistic assumptions about error terms; see Spanos (1986). This section elaborates on how the key methodological problems raised by the LSE tradition can be addressed in the context of this framework, and replies to several charges leveled against this tradition by its critics, including Hansen (1996, 1999), Faust and Whiteman (1997), and Wooldridge (1998).

5.1 Haavelmo and the LSE Tradition

The key to elucidating and addressing the methodological problems [A]-[B] (section 1.1) was the untangling of the statistical from the substantive premises. The answer was inspired by Haavelmo’s SEM and his emphasis on the joint distribution of the observables, D(Z_1, …, Z_T; φ). Behind a structural model:

Γ y_t + Δ x_t = ε_t,   ε_t ∼ N(0, Ω),   t ∈ ℕ,   (10)

there is a reduced form which is in essence the (implicit) statistical model:

y_t = Π x_t + u_t,   u_t ∼ N(0, Σ),   t ∈ ℕ,   (11)

with (10) and (11) related via the identifying restrictions:

Π = −Γ⁻¹Δ,   Σ = Γ⁻¹Ω(Γ⁻¹)′,   i.e. G(φ, θ) = 0, with φ := (Γ, Δ, Ω) and θ := (Π, Σ).   (12)

The substantive premises M_φ(z) and the statistical premises M_θ(z) can be disentangled by viewing the former as based on the theory and the latter as a parameterization of the observable process {Z_t, t ∈ ℕ}, as given in table 1 in terms of the testable probabilistic assumptions [1]-[5], and not as derived from M_φ(z).

This provides a purely probabilistic construal of M_θ(z), with the Statistical Generating Mechanism (GM) being viewed as an orthogonal decomposition of the form:

y_t = E(y_t | D_t) + u_t,   t ∈ ℕ,

where μ_t = E(y_t | D_t) denotes the systematic component, with D_t the relevant conditioning information set chosen with a view to render the ‘educed’ non-systematic component u_t = y_t − E(y_t | D_t) a martingale difference process, i.e. E(u_t | D_t) = 0. In this sense, the statistical error term u_t is [i] derived and represents non-systematic statistical information in Z_0 relative to D_t, and [ii] local in the sense that it pertains to the statistical model M_θ(z) vis-à-vis the data Z_0. In contrast, the structural error term ε_t is [i]* autonomous and could represent errors of measurement, errors of approximation, omitted effects, shocks, etc., as well as [ii]* global in the sense that it pertains to the structural model M_φ(z) vis-à-vis the phenomenon of interest. M_θ(z) is specified in terms of D(Z_1, …, Z_n; φ) via the probabilistic reduction:

D(Z_1, …, Z_n; φ) = ∏_{t=1}^{n} D(Z_t; ψ)  (under IID)  = ∏_{t=1}^{n} D(y_t | x_t; θ_1) · D(x_t; θ_2),   (13)

rendering θ ∈ Θ a particular parameterization of the process {Z_t, t ∈ ℕ}.

[Table 1. The Normal/Linear Regression model, specified in terms of the testable probabilistic assumptions [1]-[5] on the observable process {Z_t, t ∈ ℕ}.]

From this perspective, the choice of M_θ(z) begins with data Z_0, irrespective of the theory or theories that led to its choice. Once selected, data Z_0 take on ‘a life of their own’ as a particular realization of a generic process {Z_t, t ∈ ℕ}. The link between data Z_0 and the process {Z_t, t ∈ ℕ} is provided by a pertinent answer to the key question: ‘what probabilistic structure, when imposed on the process {Z_t, t ∈ ℕ}, would render data Z_0 a truly typical realization thereof?’ (Spanos, 2006a). The answer offers the relevant probabilistic structure for {Z_t, t ∈ ℕ}, which gives rise to the model in table 1, and it can be appraised using thorough M-S testing of the validity of the model assumptions, such as [1]-[5]; see Mayo and Spanos (2004). The structural model M_φ(z) enters the picture when choosing a particular parameterization θ ∈ Θ for {Z_t, t ∈ ℕ}, so that M_φ(z) is nested parametrically in M_θ(z) via G(φ, θ) = 0.

83Generalizing the above distinction, one can argue that behind every structural model, generically specified by:

M_φ(z) = {f(z; φ), φ ∈ Φ},   z ∈ R^n_Z,

where f(z; φ) is the joint distribution of the sample Z := (Z_1, …, Z_n), there exists an (often implicit) statistical model, taking the generic form:

M_θ(z) = {f(z; θ), θ ∈ Θ},   z ∈ R^n_Z,

that can be viewed as a parameterization of the observable stochastic process {Z_t, t ∈ ℕ} underlying data Z_0; the statistical adequacy of M_θ(z) underwrites the reliability of all inferences based on M_φ(z) via G(φ, θ) = 0.

This perspective enables one to assess the statistical validity of M_θ(z) by testing its assumptions, e.g. [1]-[5], independently of M_φ(z), since [1]-[5] concern only the data Z_0. This purely probabilistic construal of M_θ(z) enables one to delineate two very distinct questions which are often conflated:

[a] statistical adequacy: does M_θ(z) account for the chance regularities in Z_0?

[b] substantive adequacy: does the model M_φ(z) adequately capture (describe, explain, predict) the phenomenon of interest?

Statistical adequacy is established by probing thoroughly the assumptions of M_θ(z) using trenchant M-S tests and ascertaining that no departures are detected. This addresses the concerns of the LSE tradition about statistical misspecification by ensuring that any inferences based on M_θ(z) are reliable, in the sense that the actual error probabilities of a test or a confidence/prediction interval approximate closely the nominal (assumed) ones. Applying a .05 significance level test when the actual type I error is closer to .9 will lead an inference astray; such discrepancies can arise from what in textbook econometric terms might be described as ‘minor’ departures. What matters is not the ‘size’ of the departure but the magnitude of the discrepancy between actual and nominal error probabilities that it induces; see Spanos and McGuirk (2001).
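The gap between actual and nominal error probabilities can be made concrete with a minimal Monte Carlo sketch (in Python, using numpy and statsmodels; the AR(1) design, sample size and number of replications are illustrative choices, not taken from the studies cited): a nominal .05 t-test of a zero slope, applied in a static Linear Regression of one autocorrelated series on another, unrelated, autocorrelated series, rejects the true null far more often than 5% of the time.

```python
# Illustrative sketch: actual vs. nominal type I error when temporal dependence
# in the data is ignored by a static Linear Regression (assumed design, not from the paper).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, reps, rho = 100, 2000, 0.75
rejections = 0

def ar1(n, rho, rng):
    """Generate a stationary AR(1) series with N(0,1) innovations."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.standard_normal()
    return x

for _ in range(reps):
    y, x = ar1(n, rho, rng), ar1(n, rho, rng)     # unrelated series: the null (zero slope) is true
    fit = sm.OLS(y, sm.add_constant(x)).fit()
    rejections += fit.pvalues[1] < 0.05           # nominal 5% t-test on the slope

print(f"actual type I error ~ {rejections / reps:.2f} vs. nominal 0.05")
```

The rejection frequency is typically several times the nominal .05, even though each series is stationary and the departure might look ‘minor’ by textbook standards.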

M_θ(z) is built exclusively on the statistical information contained in data Z_0, and acts as a mediator between M_φ(z) and Z_0. The ontological commitments in specifying M_θ(z) concern the existence of:

[A] a rich enough probabilistic structure to ‘model’ the chance regularities in Z_0,

[B] a θ* ∈ Θ such that f(z; θ*), z ∈ R^n_Z, could have generated Z_0.

On the other hand, M_φ(z) is viewed as aiming to approximate the actual mechanism underlying the phenomenon of interest by using abstraction, simplification, and a focus on particular aspects of the phenomenon (selecting the relevant observables Z_t), and should be assessed as such. To establish substantive adequacy one needs to secure statistical adequacy first, and then proceed to probe for several potential errors, like omitted but relevant factors, false causal claims, etc.

It is important to note that the notion of statistical adequacy is related to the LSE tradition’s notion of congruence, with some important differences, including the fact that congruence assumes ‘homoscedastic, innovation errors’ and ‘theory consistent, identifiable structures’ (Hendry, 1987). Statistical adequacy assumes (indirectly) martingale difference errors (which could be heteroskedastic), and it purposely excludes any form of theory consistency so as to separate, ab initio, the statistical from the substantive assumptions. As argued next, the distinction between the statistical premises M_θ(z) and the substantive premises M_φ(z) is instrumental in elucidating and addressing several crucial methodological issues and problems in econometrics, as well as in countering the critics of the LSE tradition.

5.2 Substantive vs. Statistical Premises of Inference

841. ‘Realisticness’ vs. statistical misspecification. The confusion between substantive and statistical inadequacy is pervasive in the ‘pre-eminence of theory’ literature, as exemplified by claims like Prescott’s (1986, 84): “The models constructed within this theoretical framework are necessarily highly abstract. Consequently, they are necessarily false, and statistical hypothesis testing will reject them.”

It is one thing to say that a structural model M_φ(z) is a crude approximation of the reality it aims to capture, and entirely another to claim that the implicitly assumed statistical model M_θ(z) could not have generated data Z_0, which is what statistical inadequacy amounts to. Hence, a structural model may always come up short in securing substantive adequacy for the phenomenon of interest, but M_θ(z) may be perfectly adequate for answering the substantive questions of interest. There is nothing wrong with constructing simple, abstract and idealized theory-models. It becomes problematic when the data Z_0 are given the subordinate role of ‘quantifying’ M_φ(z) in ways that (i) largely ignore the probabilistic structure of the data, (ii) employ unsound links between M_φ(z) and the data Z_0, like calibration and moment matching, and (iii) ignore the probing of the substantive adequacy of M_φ(z); see Spanos (2014).

2. Statistical model validation vs. inference. The above perspective brings out the distinct nature of the statistical model validation and the inferential components of modeling, stemming from the different questions they pose to the data. M-S testing assesses whether the family M_θ(z) = {f(z; θ), θ ∈ Θ} could have generated Z_0, regardless of the ‘true’ value θ* of θ. Statistical inference takes that for granted and aims to narrow Θ down to θ*, whatever θ* happens to be! The former precedes the latter and constitutes a separate stage of empirical modeling that secures the reliability of inference. Moreover, blending the two components into an overall decision-theoretic problem can lead to fallacious framings like the pre-test bias claim; see Spanos (2010a).

Hence, a structural model M_φ(z) in the context of the SEM is said to be empirically valid when (Spanos, 1990):

a. the implicit statistical model M_θ(z) is statistically adequate, and

b. the overidentifying restrictions, G(φ, θ) = 0, are data-acceptable.

The testing in (b) is a signature issue for the LSE tradition, aiming to distinguish between pertinent and non-pertinent substantive information, and constitutes the first step towards establishing the substantive adequacy of M_φ(z) vis-à-vis the phenomenon of interest. Under (a)-(b) the estimated empirical model enjoys both statistical and theoretical meaningfulness. Hence, it can be used as the basis of inferences, including prediction and policy simulations. This perspective passes the onus of bridging the gap between theory and data onto the theorist, by calling for structural models that are empirically valid in the sense of (a)-(b). In this sense, the LSE’s links between theory and data in the form of long-run solutions and error-correction specifications, although expedient, are too weak if the primary objective is to secure substantively adequate structural models.

3. Time series vs. cross-section data. Viewing a statistical model M_θ(z) as a parameterization of {Z_t, t ∈ ℕ} renders the usual distinction between time series and cross-section data models (see Wooldridge, 2012, 344) misleading, since viewing data as realizations of stochastic processes is equally applicable to both types of data. The only tenuous difference between the two is that for time series data there is one natural ordering, time, which is an interval-scale variable, whereas for cross-section data there might be several natural orderings of interest, like spatial location, size, gender, age, etc., whose scale of measurement might be ordinal, nominal or interval. Hence, the claim that for cross-section data one does not need to worry about dependence and/or heterogeneity is misguided. The LSE tradition’s concerns about statistical misspecification and the ensuing unreliability of inference for time series data are even more relevant for cross-section data.
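The point about orderings can be illustrated with a small sketch (the variable ‘size’ and all parameter values are invented for illustration): a cross-section regression of y on x alone ignores an omitted size effect, and ordering the residuals by size exposes the ignored regularity with exactly the kind of dependence probe routinely applied to time series.

```python
# Illustrative sketch: dependence/heterogeneity in cross-section data show up once
# the observations are ordered by a relevant covariate (hypothetical 'size' variable).
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(1)
n = 300
size = rng.uniform(1, 10, n)                               # ordering variable (e.g. firm size)
x = rng.standard_normal(n)
y = 1.0 + 0.5 * x + 0.4 * size + rng.standard_normal(n)    # size matters, but is omitted below

fit = sm.OLS(y, sm.add_constant(x)).fit()                  # cross-section regression on x only
ordered_resid = fit.resid[np.argsort(size)]                # impose the size ordering on residuals
lb_pvalue = acorr_ljungbox(ordered_resid, lags=[5]).iloc[0, 1]
print("Ljung-Box p-value over the size ordering:", round(lb_pvalue, 4))   # ~0: ignored regularity
```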

4. Revisiting the Gauss-Markov theorem. Despite its historical importance in the development of statistical modeling and inference, when viewed from the above perspective the Gauss-Markov theorem is at best of very limited value and at worst highly misleading. First, ‘linearity’ of the estimator in y is a phony property, unbiasedness without consistency is useless, and relative efficiency within an artificially restricted class of estimators is of very limited value; the theorem does not, therefore, provide a sufficient basis for inference. For instance, it cannot be used to test the significance of the coefficients, since knowing only that the errors have zero mean and an unknown distribution D(.) provides insufficient information for reliable inferences; see Bahadur and Savage (1956). Second, broad statistical premises yield imprecise inferences that often invoke asymptotic (n → ∞) arguments without any assurance of enhanced reliability. Indeed, when such broad premises include non-testable assumptions, as in the case of nonparametric models, the reliability of inference is at best unknown. Learning from data takes place when one applies reliable [actual error probabilities approximately equal to the nominal ones] and incisive [optimal] inference procedures, stemming from the statistical adequacy of M_θ(z); see Spanos (2012). Statistical adequacy is the price one has to pay to secure learning from data. Hence the emphasis on a complete and internally consistent set of probabilistic assumptions pertaining to the observable process {Z_t, t ∈ ℕ} underlying Z_0, in contrast to an incomplete set of error term assumptions mixed in with substantive assumptions like ‘no omitted variables’, etc.; see Spanos (2010c).

5. Revisiting Instrumental Variables (IV). The above distinction between M_φ(z) and M_θ(z) sheds very different light on IV estimators and the choice of ‘optimal’ instruments. Behind every IV estimator there is an implicit reduced form whose statistical adequacy is taken for granted. However, if the latter is statistically misspecified, the sampling distribution of the IV estimator will differ from the assumed one, giving rise to unreliable inferences. Hence, the choice of instruments should be based on a statistically adequate reduced form, which in the case of time series data would often require respecification to include lags and trends. That is, the choice of instruments is not based solely on theoretical information; statistical information plays a crucial role in determining the optimal instruments needed to secure the statistical adequacy of the implicit reduced form; see Spanos (1986).

85Similarly, despite confident declarations to the contrary:

“One must decide which variables are endogenous and which are conditioning variables using outside criteria.” (Wooldridge, 1998, 297).

Statistical information plays a crucial role in determining which variables can be treated as conditioning variables. As shown in Spanos (1994), when the joint distribution D(y_t, x_t; ψ) in (13) is Student’s t, weak exogeneity (Engle et al., 1983) does not hold for statistical reasons, and thus one needs to retain the marginal distribution D(x_t; θ_2) for inference purposes.

6. Model validation vs. model selection. The same distinction clarifies the difference between model validation at the statistical level, which refers to establishing the statistical adequacy of M_θ(z), and model selection at the substantive level, which concerns M_φ(z). The problems of omitted variables or of selecting the relevant regressors (Sargan, 1981) belong to the latter category. What is crucial when posing substantive questions of interest, however, is the reliability of the test, which is secured when M_θ(z) is statistically adequate. No evidence for or against a structural model M_φ(z) can be established on the basis of a misspecified M_θ(z).

This relates to the LSE strategy of general-to-specific modeling in conjunction with encompassing, which aims to address model validation and selection simultaneously. This strategy is most effective in the special case where (i) all the potentially relevant variables are included in data Z_0 at the outset, and (ii) the general family of models selected includes a statistically adequate one. However, irrespective of whether one uses a general-to-specific or a specific-to-general testing procedure, the key issues are: (i) keep track of the relevant error probabilities, and (ii) ensure that inferences rely on a statistically adequate model to secure their reliability; see Spanos (2006b).

Akaike-type model selection. Sargan’s intuition that Akaike-type criteria, like the AIC, are inadequate for model selection (see Phillips, 1985, 133) is fully justified when viewed in the context of the above modeling perspective. It can be shown that the AIC ranking of the different models is inferentially equivalent to pairwise comparisons among the different models in the prespecified family {M_i(z), i = 1, 2, …, m} using N-P testing, but with a serious flaw: it ignores the relevant error probabilities. Moreover, such model selection procedures are in direct conflict with model validation using thorough M-S testing to secure statistical adequacy. This is because M-S testing would give rise to the choice of a particular model within the family, assuming it includes a statistically adequate one, but this choice will rarely coincide with the highest-ranked AIC model. Worse, the AIC will yield a highest-ranked model even when the family does not include a statistically adequate one; see Spanos (2010b).
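A minimal sketch of this point (the AR(1) data and the trend-polynomial family are illustrative choices, not taken from Spanos, 2010b): the AIC dutifully ranks a family of static trend regressions fitted to AR(1) data, yet a simple residual-autocorrelation M-S probe flags every candidate as misspecified.

```python
# Illustrative sketch: the AIC ranks candidates even when no candidate is statistically adequate.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(2)
n = 200
y = np.zeros(n)
for t in range(1, n):                          # data: a simple AR(1) process
    y[t] = 0.8 * y[t - 1] + rng.standard_normal()

trend = np.arange(n, dtype=float)
for p in range(4):                             # candidate family: static trend polynomials, order p
    X = np.ones((n, 1)) if p == 0 else sm.add_constant(
        np.column_stack([trend ** k for k in range(1, p + 1)]))
    fit = sm.OLS(y, X).fit()
    lb_pvalue = acorr_ljungbox(fit.resid, lags=[4]).iloc[0, 1]   # M-S probe: residual autocorrelation
    print(f"trend order {p}: AIC = {fit.aic:9.1f}, Ljung-Box p-value = {lb_pvalue:.4f}")
# The AIC still singles out a 'best' candidate, yet every p-value is close to zero:
# no model in the family accounts for the temporal dependence in the data.
```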

867. Spurious results. Spurred by the general impression that all statistical methods which rely on ‘regularities’ in the data are highly susceptible to the problem of statistical spuriousness, textbook econometricians criticize the LSE perspective as indulging in a more sophisticated form of ‘data mining’; see Faust and Whiteman (1997). Contrary to such a view, statistical adequacy provides the key to explaining spurious results, including those in the classic paper by Yule (1926), as the result of departures from the assumptions of the LR model, such as [1]-[5] (table 1). Similarly, the Granger and Newbold (1974) spurious results stem from the fact that the simulated data exhibit temporal dependence which is ignored by the estimated LR model. Indeed, their simulation results constitute a classic example of the actual error probabilities being very different from the nominal ones due to statistical misspecification. Phillips (1986) derived the sampling distributions of the estimated parameters under the misspecification to shed light on these simulation results. Despite their unquestionable importance, such derivations do not address the problem of spurious results. For that one needs to respecify the LR model to account for the temporal dependence in the data ignored by the original specification, using the Dynamic Linear Regression:

y_t = β_0 + β_1 x_t + β_2 y_{t−1} + β_3 x_{t−1} + u_t,   t ∈ ℕ.
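The Granger-Newbold experiment and the respecification remedy can be sketched as follows (a hedged illustration; sample size, seed and lag length are arbitrary choices): regressing one independent random walk on another produces a ‘significant’ slope, while the Dynamic Linear Regression, which adds y_{t−1} and x_{t−1} to the conditioning set, does not.

```python
# Illustrative sketch: spurious regression between independent random walks,
# and the Dynamic Linear Regression respecification.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 200
y = np.cumsum(rng.standard_normal(n))          # two independent random walks,
x = np.cumsum(rng.standard_normal(n))          # in the spirit of Granger and Newbold (1974)

static = sm.OLS(y, sm.add_constant(x)).fit()
print("static LR slope t-ratio:", round(static.tvalues[1], 2))     # typically 'significant'

X_dlr = sm.add_constant(np.column_stack([x[1:], y[:-1], x[:-1]]))  # y_t on (1, x_t, y_{t-1}, x_{t-1})
dlr = sm.OLS(y[1:], X_dlr).fit()
print("DLR slope on x_t t-ratio:", round(dlr.tvalues[1], 2))       # typically insignificant
```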

8. Mis-Specification (M-S) testing: thoroughly probing the validity of the probabilistic assumptions of M_θ(z), e.g. [1]-[5], vis-à-vis data Z_0. This concerns only question [a] above, and constitutes ‘testing outside’ the boundaries of M_θ(z), aiming to probe exhaustively the set P(z) of all possible models that could have given rise to Z_0. The generic hypotheses for M-S testing take the form:

H: f*(z) ∈ M_θ(z)   vs.   not-H: f*(z) ∈ [P(z) − M_θ(z)],

where f*(z) denotes the ‘true’ distribution of the sample. This framing should be contrasted with N-P testing, which constitutes ‘testing within’ M_θ(z):

H_0: f*(z) ∈ M_0(z)   vs.   H_1: f*(z) ∈ M_1(z),   where M_0(z) and M_1(z) constitute a partition of M_θ(z).

That is, M-S testing is proper statistical testing, but different from N-P testing. The differences between them raise a number of conceptual and technical issues, including ‘how to particularize the set [P(z) − M_θ(z)] in order to construct M-S tests’, ‘how to interpret a rejection of H’, and ‘how to secure the effectiveness/reliability of the diagnosis’. The latter can be rendered effective by following specific strategies (see the illustrative sketch after this list), including:

(a) astute ordering of M-S tests, so as to exploit the interrelationships among the model assumptions with a view to ‘correcting’ each other’s diagnosis,

(b) joint M-S tests (testing several assumptions simultaneously) designed to minimize the maintained assumptions, and

(c) combining parametric (high power but narrow scope) and nonparametric (low power but broad scope) tests; see Spanos (2000, 2010b).
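A sketch of such a battery, combining parametric and nonparametric probes, might look as follows (a hedged illustration; the data are generated to satisfy the Linear Regression assumptions, and the particular tests are convenient choices available in statsmodels rather than a prescribed list):

```python
# Illustrative sketch: a small M-S test battery on Linear Regression residuals.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey, het_white
from statsmodels.stats.stattools import jarque_bera
from statsmodels.sandbox.stats.runs import runstest_1samp

rng = np.random.default_rng(4)
n = 150
x = rng.standard_normal(n)
y = 1.0 + 0.5 * x + rng.standard_normal(n)        # data consistent with the LR assumptions
fit = sm.OLS(y, sm.add_constant(x)).fit()

_, jb_p, _, _ = jarque_bera(fit.resid)                    # parametric: Normality
_, bg_p, _, _ = acorr_breusch_godfrey(fit, nlags=4)       # parametric: residual autocorrelation
_, white_p, _, _ = het_white(fit.resid, fit.model.exog)   # parametric: heteroskedasticity
_, runs_p = runstest_1samp(fit.resid)                     # nonparametric: runs test (broad scope)

print({"Jarque-Bera": round(jb_p, 3), "Breusch-Godfrey": round(bg_p, 3),
       "White": round(white_p, 3), "runs": round(runs_p, 3)})
# Small p-values would flag departures; by construction none is expected here.
```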

87These features of M-S testing can be used to explain away several confusions and misplaced charges leveled against it, including infinite regress, circularity, double counting, multiple testing and data mining; see Spanos (2010a, 2010b).

9. Respecification. Respecifying M_θ(z) to account for systematic information in the data calls for returning to the stochastic process {Z_t, t ∈ ℕ} underlying data Z_0 with a view to choosing a more pertinent probabilistic structure; see Spanos (1994). In the case of time series data the NIID assumptions that underlie the model in table 1 are likely to be inappropriate, and replacing them with Normality, Markov dependence, mean heterogeneity and covariance stationarity might be more appropriate. Hence, the LSE strategy of using statistical models with trends and lags in the case of time series data is often the only sound move if ‘learning from data’ is to be attained. This calls for extending the conditioning information set to include the past history of the process (lags) and deterministic trend terms, irrespective of the theory, since the parametric nesting via G(φ, θ) = 0 is easily retained when adding trends and lags. For substantive adequacy purposes, however, one needs to replace generic terms like trend polynomials (which represent ignorance) with relevant explanatory variables; see Spanos (2010c).

88When viewed from the above perspective, the ‘error-fixing’ strategies of the textbook approach, like error-autocorrelation and heteroskedasticity ‘corrections’, can be blamed for contributing significantly to the untrustworthiness of the empirical evidence in econometrics journals. This is primarily due to two interrelated sources: (a) the neglect of establishing statistical adequacy, and (b) setting up the fallacy of rejection as normal practice. Hence, textbook recommendations such as: “my recommendation to applied researchers would be to omit the tests of normality and conditional heteroskedasticity, and replace all conventional standard errors and covariance matrices with heteroskedasticity-robust versions” (Hansen, 1999, 195) are misplaced, because the form of non-Normality could matter, and the ‘corrections’ do nothing to address the unreliability-of-inference problem stemming from the discrepancy between actual and nominal error probabilities; see Spanos and McGuirk (2001), Spanos and Reade (2014).

Similarly, textbook claims like ‘departures from the no-autocorrelation assumption affect only the efficiency, and not the unbiasedness and consistency, of the OLS estimator’ are highly questionable. This is because such claims rely on two dubious (but testable, as the LSE tradition emphasized) presuppositions:

(i) the Dynamic Linear Regression model in (8) is statistically adequate for the particular data Z_0 (evading the fallacy of rejection), and

(ii) the common factor restrictions in (9) are valid for Z_0.

In practice, the stipulations (i)-(ii) are unlikely to hold, rendering both the OLS and the GLS (generalized least squares, based on the assumed error-autocorrelation structure) estimators inconsistent, and giving rise to untrustworthy evidence; see McGuirk and Spanos (2008).

10. Addressing Fallacies. The distinction between M_φ(z) and M_θ(z), in conjunction with the notion of severity, can be used to address certain foundational problems associated with frequentist testing, including safeguarding inferences against:

(a) the fallacy of acceptance: interpreting ‘accept H_0’ [no evidence against H_0] as evidence for H_0; e.g. the test might have had low power to detect an existing discrepancy, and

(b) the fallacy of rejection: interpreting ‘reject H_0’ [evidence against H_0] as evidence for a particular H_1; e.g. conflating statistical with substantive significance (Mayo and Spanos, 2006).

A retrospective view of the textbook respecification strategy of adopting the particular alternative H_1 when an M-S test rejects H_0 shows it to be an example of the fallacy of rejection. For instance, a rejection of H_0 in (5) by the D-W test (6) provides evidence against H_0 and for the presence of temporal dependence of the generic form in (3), but it does not provide evidence for the particular form (4) assumed by H_1. For that, one needs to validate the assumptions of the alternative model in (2), which involves confirming the presuppositions (i)-(ii) in 9 above.
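A minimal simulation sketch of the common factor point (not from McGuirk and Spanos, 2008; the coefficients 0.5, 0.7, 0.3 and the AR(1) regressor are illustrative choices): when the data come from a Dynamic Linear Regression whose common factor restriction fails, the static OLS slope and the AR(1) ‘autocorrelation-corrected’ slope both settle on values different from the true coefficient, whereas the DLR recovers it.

```python
# Illustrative sketch: static OLS and an AR(1)-'corrected' (GLS-type) estimator vs. the DLR
# when the common factor restriction is violated (assumed design, not from the paper).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 5000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.8 * x[t - 1] + rng.standard_normal()
    # DLR with beta1=0.5, beta2=0.7, beta3=0.3; a common factor would require beta3 = -0.35.
    y[t] = 0.5 * x[t] + 0.7 * y[t - 1] + 0.3 * x[t - 1] + rng.standard_normal()

static = sm.OLS(y, sm.add_constant(x)).fit()                                  # static LR
corrected = sm.GLSAR(y, sm.add_constant(x), rho=1).iterative_fit(maxiter=10)  # AR(1) 'correction'
X_dlr = sm.add_constant(np.column_stack([x[1:], y[:-1], x[:-1]]))
dlr = sm.OLS(y[1:], X_dlr).fit()                                              # Dynamic Linear Regression

print("static OLS slope:       ", round(static.params[1], 2))      # far from the true 0.5
print("AR(1)-'corrected' slope:", round(corrected.params[1], 2))   # also differs from 0.5
print("DLR coefficient on x_t: ", round(dlr.params[1], 2))         # close to 0.5
```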

89Despite several papers by the LSE tradition questioning the ‘error-autocorrelation correction’ (Mizon, 1995b), Wooldridge (1998, 297) criticizes Hendry (1995):

Another notable feature of Hendry’s approach is his insistence that all conditional means have fully specified dynamics. He and some others have been on this crusade for years, and, at least in the United States, have had little impact on mainstream empirical econometrics. Static models and finite distributed lag (FDL) models are estimated routinely, often with corrections for serial correlation.

90This comment epitomizes the attitude of the textbook tradition. It has institutionalized fallacious reasoning in the form of ‘error-fixing’ strategies, and chides anybody departing from it. Any attempt to elucidate the fallacies of acceptance/rejection by replacing confusing notions with more pertinent concepts is dismissed as nothing more than “mounds of jargon” or “ill-conceived idiosyncratic treatment”.

6 Conclusions

91Since the mid 1960s the LSE tradition has contributed many innovative techniques and modeling strategies to applied econometrics. Its perspective differs from other post-Cowles traditions in so far as it strives to strike a balance between theory-oriented textbook econometrics and the data-oriented traditions by giving the data ‘a voice of its own’, without ignoring pertinent substantive information.

92Denis Sargan is undoubtedly the ‘father’ of the LSE tradition, but the protagonist who brought out the revolutionary nature of the LSE perspective and unflaggingly endeavored to change empirical modeling in economics was David Hendry. Their different personalities complemented each other in a way that contributed significantly to the success of that tradition. Sargan was a reluctant revolutionary because he saw himself as pursuing the agenda set out by the Cowles Commission in the early 1950s. He was a lot more comfortable discussing Instrumental Variables, Edgeworth expansions and Gram-Charlier approximations than methodological issues pertaining to empirical modeling. In contrast, Hendry relished the opportunity to compare different approaches to modeling, and break new ground by introducing alternative inference procedures and modeling strategies that improve learning from data.

93The above retrospective appraisal of the LSE tradition revealed that its key contributions revolved around Haavelmo’s call “to build models that explain what has been observed”, and that its perspective can be justified on sound philosophical grounds; see Spanos (2010a; 2012). Its full impact on applied econometrics will take time to unfold, but the pervasiveness of its main message stems from the fact that, in fields like economics, a statistically adequate model can play a crucial role in guiding the search for better (substantively adequate) theories by demarcating ‘what there is to explain’. This calls for paying sufficient attention to accounting for the statistical regularities in the observed data. Kepler’s ‘law’ of the elliptical motion of the planets was originally just an empirical regularity that eventually guided Newton toward his theory of universal gravitation; see Spanos (2007).

Thanks are due to Olav Bjerkholt, David F. Hendry, Peter C. B. Phillips and two anonymous referees for numerous helpful comments and suggestions.


Bibliography

Anderson, Theodore W. 1962. The Choice of the Degree of a Polynomial Regression as a Multiple-Decision Problem. Annals of Mathematical Statistics, 33(1): 255-265.

Anderson, Theodore W. 1971. The Statistical Analysis of Time Series. New York, NY: Wiley.

Andreou, Elena, Nikitas Pittis and Aris Spanos. 2001. On Modeling Speculative Prices: The Empirical Literature. Journal of Economic Surveys, 15(2): 187-220.

Bahadur, Raghu R. and Leonard J. Savage. 1956. The Nonexistence of Certain Statistical Procedures in Nonparametric Problems. The Annals of Mathematical Statistics, 27(4): 1115-1122.

Box, George Edward P. and Gwilym M. Jenkins. 1970. Time Series Analysis: Forecasting and Control (1976 revised edition). San Francisco, CA: Holden-Day.

Brown, Robert L., James Durbin and James M. Evans. 1975. Techniques for Testing the Constancy of Regression Relationships Over Time (with discussion). Journal of the Royal Statistical Society, B 37(2): 149-192.

Cairnes, John E. 1888. The Character and Logical Method of Political Economy. Reprints of Economic Classics, 1965. New York, NY: Augustus, M. Kelley.

Campos, Julia, Neal R. Ericsson and David F. Hendry. 2005. General-to-Specific Modelling, volumes I-II. Cheltenham: Edward Elgar.

Castle, Jennifer L., Doornik, Jurgen A., and David F. Hendry. 2012. Model Selection When There are Multiple Breaks. Journal of Econometrics, 169(2): 239–246.

Chalmers, Alan F. 1999. What Is This Thing Called Science? 3rd ed. Indianapolis, IN: Hackett.

Cochrane, Donald and Guy H. Orcutt. 1949. Application of Least Squares Regression to Relationships Containing Auto-correlated Error Terms. Journal of the American Statistical Association, 44(245): 32-61.

Cox, David R. and David V. Hinkley. 1974. Theoretical Statistics. London: Chapman & Hall.

Cramer, Harald. 1946. Mathematical Methods of Statistics. Princeton, NJ: Princeton University Press.

David, Florence N. and Jerzy Neyman. 1938. Extension of the Markoff Theorem on Least Squares. Statistical Research Memoirs, 2: 105-116.

Davidson, James E.H., David F. Hendry, Frank Srba, and Steven J. Yeo. 1978. Econometric Modelling of the Aggregate Time-series Relationship Between Consumers’ Expenditure and Income in the United Kingdom. Economic Journal, 88(352): 661-692.

Desrosières, Alain. 1998. The Politics of Large Numbers: A History of Statistical Reasoning. Cambridge, MA: Harvard University Press.

Doornik, Jurgen A. 2009. Autometrics. In Jennifer L. Castle and Neil Shephard (2009), The Methodology and Practice of Econometrics: A Festschrift in Honour of David F. Hendry. Oxford: Oxford University Press, 88-121.

Durbin, James. 1960. The Fitting of Time-Series Models. Review of the International Statistical Institute, 28(3): 233-244.

Durbin, James. 1975. Tests of Model Specification Based on Residuals. In J.N. Srivastava (ed.), Survey of Statistical Design and Linear Models. Amsterdam: North Holland, 129-143.

Durbin, James and Geoffrey S. Watson. 1950. Testing for Serial Correlation in Least Squares Regression, I. Biometrika, 37(3/4): 409-428.

Durbin, James and Geoffrey S. Watson. 1951. Testing for Serial Correlation in Least Squares Regression, II. Biometrika, 38(1/2): 159-178.

Engle, Robert F., and Clive W.J. Granger. 1987. Cointegration and Error Correction: Representation, Estimation, and Testing. Econometrica, 55(2): 251-276.

Engle, Robert F., David F. Hendry, and Jean-François Richard. 1983. Exogeneity. Econometrica, 51(2): 277-304.

Ericsson, Neal R. 2004. The ET Interview: Professor David F. Hendry. Econometric Theory, 20(4): 743-804.

Faust, John and Charles H. Whiteman. 1997. General-to-Specific Procedures for Fitting a Data-admissible, Theory-inspired, Congruent, Parsimonious, Encompassing, Weakly-exogenous, Identified, Structural Model of the DGP: A Translation and Critique. Carnegie-Rochester Conference Series on Public Policy, 47: 121-161.

Frisch, Ragnar. 1934. Statistical Confluence Analysis by Means of Complete Regression Systems. Oslo: Universitetets Økonomiske Institutt.

Gilbert, Christopher L. 1986. Professor Hendry’s Econometric Methodology. Oxford Bulletin of Economics and Statistics, 48(3): 283-307.

Gilbert, Christopher L. 1989. LSE and the British Approach to Time Series Econometrics. Oxford Economic Papers, 41(1): 108-128.

Gilbert, Christopher L. 1991. Richard Stone, Demand Theory and the Emergence of Modern Econometrics. Economic Journal, 101(405): 288-302.

Goldberger, Arthur S. 1964. Econometric Theory. New York, NY: Wiley.

Granger, Clive W.J. 1981. Some Properties of Time Series Data and Their Use in Econometric Model Specification. Journal of Econometrics, 16(3): 121-130.

Granger, Clive W.J. and Paul Newbold. 1974. Spurious Regressions in Econometrics. Journal of Econometrics, 2(2): 111-120.

Granger, Clive W.J. (ed.). 1990. Modelling Economic Series: Readings in Econometric Methodology. Oxford: Oxford University Press.

Greene, William H. 2011. Econometric Analysis, 7th ed., Upper Saddle River, NJ: Prentice Hall.

Haavelmo, Trygve. 1940. The Inadequacy of Testing Dynamic Theory by Comparing Theoretical Solutions and Observed Cycles. Econometrica, 8(4): 312-321.

Haavelmo, Trygve. 1943a. The Statistical Implications of a System of Simultaneous Equations. Econometrica, 11(1): 1-12.

Haavelmo, Trygve. 1944. The Probability Approach in Econometrics. Econometrica, 12, supplement, 1-115.

Haavelmo, Trygve. 1947. Methods of Measuring the Marginal Propensity to Consume. Journal of the American Statistical Association, 42(237): 105-122.

Haavelmo, Trygve. 1958. The Role of the Econometrician in the Advancement of Economic Theory. Econometrica, 26(3): 351-357.

Hannan, Edward J. 1970. Multiple Time Series. New York, NY: Wiley.

Hansen, Bruce E. 1996. Review Article, ‘Econometrics: Alchemy or Science?’ by David Hendry. The Economic Journal, 106(438): 1398-1413.

Hansen, Bruce E. 1999. Discussion of ‘Data mining Reconsidered’. The Econometrics Journal, 2(2): 192-201.

Hendry, David F. 1971. Maximum Likelihood Estimation of Systems of Simultaneous Regression Equations with Errors Generated by a Vector Autoregressive Process. International Economic Review, 12(2): 257-272.

Hendry, David F. 1976. The Structure of Simultaneous Equations Estimators. Journal of Econometrics, 4(1): 51-88.

Hendry, David F. 1977. Comments on Granger-Newbold "On the Time Series Approach to Econometric Model Building" and Sargent-Sims "Business Cycle Modeling Without Pretending to Have Too Much A Priori Economic Theory". In Christopher A. Sims (ed.), New Methods in Business Cycle Research. Minneapolis, MN: Federal Reserve Bank of Minneapolis, 183-202.

Hendry, David F. 1980. Econometrics: Alchemy or Science? Economica, 47(188): 387-406; reprinted in Hendry, David F. 2000.

Hendry, David F. (ed.) 1986. Econometric Modelling with Cointegrated Variables. Special Issue. Oxford Bulletin of Economics and Statistics, 48(3).

Hendry, David F. 1987. Econometric Methodology: A Personal Perspective. In Truman F. Bewley (ed.), Advances in Econometrics: Fifth World Congress, vol. 2. Cambridge: Cambridge University Press, 29-48.

Hendry, David F. 1995. Dynamic Econometrics. Oxford: Oxford University Press.

Hendry, David F. 2000. Econometrics: Alchemy or Science? Essays in Econometric Methodology. Oxford: Oxford University Press.

Hendry, David F. 2003. J. Denis Sargan and the Origins of LSE Econometric Methodology. Econometric Theory, 19(3): 457-480.

Hendry, David F. 2009. The Methodology of Empirical Econometric Modeling: Applied Econometrics through the Looking-glass. In Mills, T. C. and K. D. Patterson (eds.), Palgrave Handbook of Econometrics, Vol 2: Applied Econometrics. Basingstoke: Palgrave MacMillan, 3–67.

Hendry, David F., and Grayham E. Mizon. 1978. Serial Correlation as a Convenient Simplification, Not a Nuisance: A Comment on a Study of the Demand for Money by the Bank of England. Economic Journal, 88(351): 549-563.

Hendry, David F., and Jean-François Richard. 1982. On the Formulation of Empirical Models in Dynamic Econometrics. Journal of Econometrics, 20(1): 3-33.

Hendry, David F., and Jean-François Richard. 1989. Recent Developments in the Theory of Encompassing. In Bernard Cornet and Henry Tulkens (eds.), Contributions to Operations Research and Economics: The Twentieth Anniversary of CORE. Cambridge, MA: MIT Press, 393-440.

Hendry, David F., and Ken F. Wallis (eds). 1984. Econometrics and Quantitative Economics. Oxford: Blackwell.

Hood, William C. and Koopmans, Tjalling C. (eds). 1953. Studies in Econometric Method. Cowles Commission Monograph, No. 14. New York, NY: Wiley.


Johansen, Soren. 1991. Estimation and Hypothesis Testing of Cointegration Vectors in Gaussian Vector Autoregressive Models. Econometrica, 59(6): 1551-1580.

Johansen, Soren. 1995. Likelihood-based Inference in Cointegrated Vector Autoregressive Models. Oxford: Oxford University Press.

Johnston, John 1963. Econometric Methods; 2nd ed. 1972. London: McGraw-Hill.

Judge, George C., William E. Griffiths, Carter R. Hill, Helmut Lütkepohl, and Tsoung-Chao Lee. 1985. The Theory and Practice of Econometrics, 2nd ed. New York, NY: Wiley.

Kendall, Maurice G. 1953. The Analysis of Economic Time-Series, Part I: Prices. Journal of the Royal Statistical Society, A, 116(1): 11-34.

Kendall, Maurice G. and Alan Stuart. 1969. The Advanced Theory of Statistics, Volume 1: Distribution Theory, 3rd ed. London: Griffin.

Kendall, Maurice G. and Alan Stuart. 1973. The Advanced Theory of Statistics, Volume 2: Inference and Relationship, 3rd ed. London: Griffin.

Kendall, Maurice G. and Alan Stuart. 1968. The Advanced Theory of Statistics, Volume 3: Design and Analysis, and Time-Series, 2nd ed. London: Griffin.

Koopmans, Tjalling C. (ed.). 1950. Statistical Inference in Dynamic Economic Models. Cowles Commission Monograph, No. 10. New York, NY: Wiley.

Kydland, Finn E. and Edward C. Prescott. 1991. The Econometrics of the General Equilibrium Approach to the Business Cycles. The Scandinavian Journal of Economics, 93(2): 161-178.

Leamer, Edward E., David F. Hendry and Dale Poirier. 1990. The ET Dialogue: A Conversation on Econometric Methodology. Econometric Theory, 6(2): 171-261.

Maasoumi, Essie. 1988a. Denis Sargan and his Seminal Contributions to Economic and Econometric Theory. In Sargan (1988), volume 1, 1-18.

Maasoumi, Essie. 1988b. Contributions of Denis Sargan to the Theory of Finite Sample Distributions and Dynamic Econometric Models. In Sargan (1988), volume 2, 1-17.

Maddala, Gangadharrao S. 1977. Econometrics. London: McGraw-Hill.

Malinvaud, Edmond. 1966. Statistical Methods of Econometrics, 2nd ed. 1970. Amsterdam: North-Holland.

Marschak, Jacob. 1953. Economic Measurement for Policy and Prediction. In Hood and Koopmans (eds), 1-26.

Mayo, Deborah G. and Aris Spanos. 2004. Methodology in Practice: Statistical Misspecification Testing. Philosophy of Science, 71(5): 1007-1025.

Mayo, Deborah G. and Aris Spanos. 2006. Severe Testing as a Basic Concept in a Neyman-Pearson Philosophy of Induction. The British Journal for the Philosophy of Science, 57(2): 323-357.

Mayo, Deborah G. and Aris Spanos (eds). 2010. Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability and the Objectivity and Rationality of Science. Cambridge: Cambridge University Press.

McGuirk, Anya and Aris Spanos. 2008. Revisiting Error Autocorrelation Correction: Common Factor Restrictions and Granger Non-Causality. Oxford Bulletin of Economics and Statistics, 71(2): 273-294.

Mizon, Grayham E. 1984. The Encompassing Approach to Econometrics. In Hendry and Wallis (eds), 135-172.

Mizon, Grayham E. 1977. Model Selection Procedures. In Michael J. Artis and Robert A. Nobay (eds.), Studies in Modern Economic Analysis, Oxford: Blackwell, 97-120.

Mizon, Grayham E. 1995a. Progressive Modeling of Macroeconomic Time Series: The LSE Methodology. In Kevin D. Hoover (ed.), Macroeconometrics: Developments, Tensions, and Prospects. Kluwer, Amsterdam, 107-170.

Mizon, Grayham E. 1995b. A Simple Message to Error-autocorrelation Correctors: Don’t! Journal of Econometrics, 69(1): 267-288.

Mizon, Grayham E. and Jean-François Richard. 1986. The Encompassing Principle and its Application to Testing Nonnested Hypotheses. Econometrica, 54(3): 657-678.

Morgan, Mary S. 1990. The History of Econometric Ideas. Cambridge: Cambridge University Press.

Neyman, Jerzy. 1952. Lectures and Conferences on Mathematical Statistics and Probability, 2nd ed. Washington DC: U.S. Department of Agriculture.

Pagan, Adrian R. 1984. Model Evaluation by Variable Addition. In David F. Hendry and Ken F. Wallis (eds), 103-133.

Pagan, Adrian R. 1987. Three Econometric Methodologies: A Critical Appraisal. Journal of Economic Surveys, 1: 3-24.

Phillips, William A. 1957. Stabilisation Policy and the Time-forms of Lagged Responses. Economic Journal, 67(266): 265-277.

Phillips, William A. 1958. The Relationship between Unemployment and the Rate of Change of Money Wages in the United Kingdom 1861-1957. Economica, 25(100): 283-299.

Phillips, Peter C.B. 1985. The ET interview: Professor Denis J. Sargan. Econometric Theory, 2(1): 119-139.

Phillips, Peter C. B. 1986. Understanding Spurious Regressions in Econometrics. Journal of Econometrics, 33(3): 311-340.

Phillips, Peter C.B. 1988. The ET Interview: Professor James Durbin. Econometric Theory, 4(1): 125-157.

Plackett, Robin L. 1949. A Historical Note on the Method of Least Squares. Biometrika, 36(3/4): 458-460.

Prescott, Edward C. 1986. Theory Ahead of Business Cycle Measurement. Federal Reserve Bank of Minneapolis, Quarterly Review, 10: 9-22.

Qin, Duo. 1993. The Formation of Econometrics: A Historical Perspective. Oxford: Clarendon Press.

Quenouille, Maurice H. 1957. The Analysis of Multiple Time-Series. London: Griffin.

Rao, Calyampudi R. 1973. Linear Statistical Inference and its Applications. New York, NY: Wiley.

Ricardo, David. 1817. Principles of Political Economy and Taxation, vol. 1 of The Collected Works of David Ricardo. Edited by Piero Sraffa and Maurice Dobb. Cambridge: Cambridge University Press.

Robbins, Lionel. 1935. An Essay on the Nature and Significance of Economic Science, 2nd ed. London: MacMillan.

Robbins, Lionel. 1971. Autobiography of an Economist. London: MacMillan.

Sargan, Denis J. 1957. The Danger of Over-Simplification. Bulletin of Oxford Institute of Statistics, 19(2): 171-178.

Sargan, Denis J. 1958. The Estimation of Economic Relationships Using Instrumental Variables. Econometrica, 26(3): 393-415.

Sargan, Denis J. 1959. The Estimation of Relationships with Autocorrelated Residuals by the Use of Instrumental Variables. Journal of the Royal Statistical Society, B, 21(1): 91-105.

Sargan, Denis J. 1964. Wages and Prices in the U.K.: A Study in Econometric Methodology. In Paul Hart, Gary Mills and John K. Whitaker (eds), Econometric Analysis for National Economic Planning, vol. 16 of Colston Papers. London: Butterworths, 25-54.

Sargan, Denis J. 1973. Model Building and Data Mining. Paper presented to the Association of University Teachers of Economics Meeting, Manchester. Published in Econometric Reviews (2001), 20(2): 159-170.

Sargan, Denis J. 1980. Some Tests of Dynamic Specification for a Single Equation. Econometrica, 48(4): 879-897.

Sargan, Denis J. 1981. The Choice Between Sets of Regressors, LSE manuscript. Published in Econometric Reviews (2001), 20(2): 171-186.

Sargan, Denis J. 1988a. Contributions to Econometrics. Essie Maasoumi (ed.), vols. 1 and 2. Cambridge: Cambridge University Press.

Sargan, Denis J. 1988b. Lectures on Advanced Econometric Theory. Oxford: Basil Blackwell.

Sargan, Denis J. 2003. The Development of Econometrics at LSE in the Last 30 Years. Econometric Theory, 19(3): 429-438.

Spanos, Aris. 1986. Statistical Foundations of Econometric Modelling. Cambridge: Cambridge University Press.

Spanos, Aris. 1988. Towards a Unifying Methodological Framework for Econometric Modelling. Economic Notes, 107-34. Reprinted in Granger (1990).

Spanos, Aris. 1989. On Rereading Haavelmo: A Retrospective View of Econometric Modeling. Econometric Theory, 5(3): 405-429.

Spanos, Aris. 1990. The Simultaneous Equations Model Revisited: Statistical Adequacy and Identification. Journal of Econometrics, 44(1-2): 87-108.

Spanos, Aris. 1994. On Modeling Heteroskedasticity: the Student’s t and Elliptical Linear Regression Models. Econometric Theory, 10(2): 286-315.

Spanos, Aris. 1995. On Theory Testing in Econometrics: Modeling with Nonexperimental Data. Journal of Econometrics, 67(1): 189-226.

Spanos, Aris. 1999. Probability Theory and Statistical Inference: Econometric Modeling with Observational Data. Cambridge: Cambridge University Press.

Spanos, Aris. 2000. Revisiting Data Mining: ‘Hunting’ With or Without a License. The Journal of Economic Methodology, 7(2): 231-264.

Spanos, Aris. 2006a. Where Do Statistical Models Come From? Revisiting the Problem of Specification. In Optimality: The Second Erich L. Lehmann Symposium, edited by Javier Rojo, Lecture Notes-Monograph Series, vol. 49. Beachwood, OH: Institute of Mathematical Statistics, 98-119.

Spanos, Aris. 2006b. Revisiting the Omitted Variables Argument: Substantive vs. Statistical Adequacy. Journal of Economic Methodology, 13(2): 179– 218.

Spanos, Aris. 2007. Curve-Fitting, the Reliability of Inductive Inference and the Error-Statistical Approach. Philosophy of Science, 74(5): 1046–1066.

Spanos, Aris. 2010a. Theory Testing in Economics and the Error Statistical Perspective. In Error and Inference, edited by Mayo, Deborah G. and Aris Spanos, 202-246.

Spanos, Aris. 2010b. Akaike-type Criteria and the Reliability of Inference: Model Selection vs. Statistical Model Specification. Journal of Econometrics, 158: 204-220.

Spanos, Aris. 2010c. Statistical Adequacy and the Trustworthiness of Empirical Evidence: Statistical vs. Substantive Information. Economic Modelling, 27(6): 1436–1452.

Spanos, Aris. 2012. Philosophy of Econometrics. In Philosophy of Economics, Uskali Maki (ed.). In the series Handbook of Philosophy of Science. Editors: Dov M. Gabbay, Paul Thagard, and John Woods. Amsterdam: Elsevier, 329-393.

Spanos, Aris. 2014. Revisiting Haavelmo’s Structural Econometrics: Bridging the Gap between Theory and Data. Forthcoming in the Journal of Economic Methodology.

Spanos, Aris, David F. Hendry and James J. Reade. 2008. Linear vs. Log-Linear Unit-Root Specification: An Application of Mis-Specification Encompassing. Oxford Bulletin of Economics and Statistics, 70(S1): 829-847.

Spanos, Aris and Anya McGuirk. 2001. The Model Specification Problem from a Probabilistic Reduction Perspective. American Journal of Agricultural Economics, 83(5): 1168-1176.

Spanos, Aris and James J. Reade. 2014. Heteroskedasticity Consistent Standard Errors and the Reliability of Inference Revisited. Virginia Tech Working paper.

Theil, Henry. 1971. Principles of Econometrics. New York, NY: Wiley.

Wooldridge, Jeffrey M. 1998. Review of ‘Dynamic Econometrics’ by David Hendry. Economica, 65(258): 296-298.

Wooldridge, Jeffrey M. 2012. Introductory Econometrics: A Modern Approach, 5th ed. OH: Thomson.

Yule, Udny G. 1926. Why Do We Sometimes Get Nonsense Correlations Between Time Series - A Study in Sampling and the Nature of Time Series. Journal of the Royal Statistical Society, 89(1): 1-64.


Notes

1 This dating is based on the presence of the main protagonist, Denis Sargan, as a faculty member at the LSE; he arrived at the LSE in 1963 and retired in 1984. The other main protagonist, David Hendry, became faculty at the LSE in 1970 and left in 1982. Singling out Sargan and Hendry is not meant to lessen the role of other contributors like Grayham Mizon and Jean-François Richard to the LSE tradition.




About the author

Aris Spanos

Department of Economics, Virginia Tech, Blacksburg, VA 24061. aris@vt.edu


Copyright

CC-BY-NC-ND-4.0

The text only may be used under licence CC BY-NC-ND 4.0. All other elements (illustrations, imported files) are “All rights reserved”, unless otherwise stated.
