In 1870, Great Britain was the epicenter of the economic world. Heart of both a transoceanic empire and the first globe-spanning economy, London sang with the clamor of finance, art, and exchange. The smokestacks of Manchester still poured filth into leaden skies, but now the workmen and women who toiled beneath them were sharing in the long-promised prosperity of the industrial age. By the last quarter of the nineteenth century, however, British writers were conscious that their hard-won economic pre-eminence was slipping. The periodicals of the day were filled with vague fears and premonitions of decline and supersession, replete with wary glances across the Channel and over the Atlantic at the rise of America and Germany. Two strands of metaphor were, as ever, most common: military and life-cycle. There was talk of “commercial invasion” and “defeat,” and simultaneously of the eclipse of a senescent Britain by “youthful” industrial powers. Observers watched the statistics with dismay, especially the falling trade figures and the inexorable rise of foreign production in chemicals and steel, and demanded answers. Was the inexorable inevitable, and who or what was to blame? Were conservative English entrepreneurs and managers failing to adopt the latest techniques, or was decline the necessary fate of a little island striving to compete with continent-sized economies? These fears were baked into the economic analyses of the day, and soon enough into the discipline of economic history emerging at the turn of the century.
Alfred Marshall, doyen of English economics, wrote in 1903:
Sixty years ago England had leadership in most branches of industry. It was inevitable that she should cede much to the great land which attracts alert minds of all nations to sharpen their inventive and resourceful faculties by impact on one another. It was inevitable that she should yield a little of it to that land of great industrial traditions which yoked science in the service of man with unrivalled energy. It was not inevitable that she should lose so much of it as she has done.
Marshall and company were right: Britain was in decline. The latest (2020) output reconstructions show that growth of real GDP per capita tumbled from 2.06 percent per annum over 1856-1873 to 1.18 percent in 1873-1899, and then further to 0.84 percent until the outbreak of the First World War. TFP growth in 1899-1913 was less than a third of that achieved in 1856-73. Economic historians have debated the periodization and even the existence of the “climacteric,” but the evidence on both counts is now quite incontrovertible. The year 1870 was a turning point, the late-Victorian end to the dynamic advance of the Industrial Revolution. The answer to the question of the Victorian and Edwardian writers—why did Britain lose her industrial supremacy—is much less clear. Discussion of the problem was as much a fixture in contemporary policy circles as it was in 1970s America or as it is again today, but the academic contest exploded into life just before the Second World War, when studies by Hoffmann and Schlote revealed that output and export growth had begun to slide during the 1870s.
The first generation of decline theories understandably attempted to exploit the output-export correlation. The Keynesian Revolution was underway; and since aggregate demand was thought to determine the level of production, economic historians were drawn to the conclusion that flagging foreign sales were to blame. “[I]ncrements of industrial exports should increase the value of total production by more than their own value,” wrote J. R. Meyer (1955), so “if the rate of growth in industrial exports had been maintained, the United Kingdom could have sustained its former high-level advance in industrial production.” But a flurry of other possibilities was entertained. Aldcroft (1964) was just one of many studies citing entrepreneurial failure, the hypothesis famously adopted by David Landes in The Unbound Prometheus to explain Britain’s sluggish expansion. Landes had noted “the importance of this human factor—the success of entrepreneurial and technological creativity on one side, the failure on the other” in marking out the difference. Aldcroft specifically blamed “irrational” decisions not to adopt best-practice techniques, such as ring spinning and automatic weaving in the cotton industry and the mechanical cutter in coal mines, underinvestment in laboratories and research personnel, and insufficient aggression in foreign markets. Others cited capital market failures that led to excessive investment abroad, sub-optimal R&D spending, and over-commitment to the old industries that had served Britain so well over the preceding century.
In 1970, Deirdre McCloskey shot back at the declinists. Her incisive paper “Did Victorian Britain Fail?” answers its titular question in the negative, contending that the country was “growing as rapidly as permitted by the growth of its resources and the effective exploitation of the available technology.” She rebutted the foreign investment view by demonstrating that, even assuming unrealistically drastic imperfections in capital markets, the efficient deployment of investment would have increased income over 1870-1913 by only 7.3 percent—barely nudging up the growth rate. She also asserted (more forcefully in a paper the following year) that British firms did make efficient technical choices, as the market environment that they faced was sufficiently competitive to force out laggards. Against Meyer’s export hypothesis, McCloskey finally concluded that “binding resource limitations” made the entire notion invalid. Late Victorian Britain lacked an “industrial reserve army of the unemployed” and stores of capital locked up in profitless foreign deposits that could have been used to build up new industrial capacity; the economy was on its production possibility frontier. “Had exports grown faster,” wrote McCloskey, “output for domestic use would have grown slower: the total was fixed by the growth of resources and productivity.”
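To see why a one-time 7.3 percent gain in the level of income barely moves the growth rate, spread it over the 43 years from 1870 to 1913 as a compound rate. The arithmetic below is an illustrative back-of-the-envelope calculation, not a reproduction of McCloskey's own computation:

```python
# Spread a one-time 7.3% income-level gain over 1870-1913 and express it
# as the implied addition to the compound annual growth rate.
level_gain = 1.073          # counterfactual income level relative to actual
years = 1913 - 1870         # 43 years

# Extra annual growth implied by the level gain
extra_growth = level_gain ** (1 / years) - 1

print(f"{extra_growth * 100:.2f} percentage points per year")  # prints "0.16 percentage points per year"
```

About 0.16 percentage points per annum — small next to the roughly 1.2-point gap between the 2.06 percent growth of 1856-1873 and the 0.84 percent of 1899-1913, which is the sense in which even a generous capital-market counterfactual "barely nudges" the growth rate.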
Subsequent research has vindicated some aspects of McCloskey’s attack. While many of the early industry studies seemed to qualitatively confirm the original view of entrepreneurial failure, microeconomic critiques like that of Leunig (1996) have shown that even the central case—the persistence of mule spinning after ring-spinning had emerged in America—was actually a rational response to factor prices. Though the ring-spindle was “technically viable” for coarse yarns by 1880, Lancashire had access to cheap supplies of skilled mule spinners, lowering the potential cost-savings from adopting the new invention by comparison with New England. As a result, 25 percent of British spindles (against 90 percent American) were of the ring variety in 1913. Leunig also found that factor costs explained variations in adoption within British industry—in regions where it was profitable to do so, firms made the profit-maximizing technical choice. Moreover, Lancashire mule spinners were 10 percent more productive than New England ring-spinners. There was no crisis of entrepreneurial conservatism.
But McCloskey’s denial of a late-Victorian productivity slowdown, as we’ve seen, holds little water. So if export demand and entrepreneurship didn’t cause British decline, what did? Was it a legacy of Britain’s technological advances? Phelps-Brown and Handfield-Jones (1952) argued that the late Victorian slowdown was the hiatus between general-purpose technologies—the transition between the dying age of steam and the rising age of electricity. They wrote that their “main explanation of the check to the rise of real income in the UK about the end of the 19th century is that the previous rise had been carried forward by the massive application of steam and steel, which had not much scope for extension; while the new techniques, especially of electricity, the internal combustion engine, and the new chemical processes did not attain massive application until during and after the First World War.” But the timing of this provocative structural hypothesis is off. The contribution of steam power to economic growth was actually rising while the economy declined, up from 0.41 percent per year in 1850-1870 to 0.51 percent during the subsequent four decades. Steam power was still driving mechanization at a “tremendous pace” circa 1870, thanks in part to the slow original development of the technology. If there was a gap between steam and electricity, it came too late to explain British stagnation.
Others have proposed that Britain’s economy was held back by “institutional rigidities” that delayed the transition to science-based corporate management. After all, nineteenth-century British industry continued to be characterized by “family capitalism,” not the large-scale intensive use of low-skill labor prevalent in the United States. Craft control of shop floors persisted in Britain long after the “American system” had eliminated it. Though the practice had helped with initial technological adoption in the face of “appropriation problems” by mitigating the opportunism of younger workers, it proved debilitating at the end of the nineteenth century. As a consequence, British manufacturers were unable to press workers for increases in effort and lacked the incentive to invest in technologies with high sunk costs. Firms that did make such investments, such as the auto industry’s Fordist assembly lines, were thereby exposed to “hold-up problems” through the possibility of prolonged contract negotiations. The restricted size of the British market reduced the payoffs to employers of fighting craft union control by comparison with the American, where high sunk cost technologies were needed to make long production runs.
Britain’s critics have frequently highlighted deficient investment in scientific and technical education, R&D, and industrial research facilities. The First Industrial Revolution had been “made” by tinkerers and “practical men” with little formal schooling, apparently making business and national elites wary of “scientific management” and basic research. Historians, prompted by the contemporary angst of leading scientists, scholars, and pundits, point to the education systems of America and Germany as the path to the future missed by the British in their laissez-faire complacency. Warwick (1982) characteristically writes: “Well-known scientists, distinguished educators, leading industrialists and Royal Commissions all inveighed against the complacency of British industry and the inadequacy of Britain’s excessively humanistic and overly decentralized educational systems, but to little avail. British businessmen persisted in taking the short-term view of things, failing to take account of how technological advances in the future could affect their enterprises… displaying behaviour consistent with the conviction that change was neither necessary nor particularly desirable.” But industrial research laboratories and higher education made little difference to American growth until after the First World War, forty years after the British slowdown. Furthermore, Britain was making significant investments in technical human capital by the end of the nineteenth century, having improved appreciably over the previous century. Slow growth occurred despite, not in the absence of, increased human capital accumulation.
Market size, resources, and the production environment were almost certainly critical factors. Across the Atlantic, the “breakthrough technologies” of the Second Industrial Revolution were being forged amid abundant supplies of industrial raw materials, particularly minerals. Gavin Wright (1990) observed that America’s comparative advantage lay in natural resource intensity, which complemented the large production runs and standardization that served her populous domestic market. In 1913, the United States produced 95 percent of the world’s natural gas, 65 percent of its petroleum, 39 percent of its coal, and 36 percent of its iron ore—the world leader in each category. Ford UK, by contrast, paid 50 percent more for its steel than its American counterpart did. The “American system” of high-throughput mass production used fuel and raw materials in enormous quantities relative to labor and plant because they were cheaper, and these relative factor prices determined the path of innovation. Biased technological change led to the adoption of methods that used the country’s resource base intensively—methods which were obviously unprofitable in Britain. As the frontier in innovation shifted overseas, the best-practice techniques increasingly became unsuited for use in the British market. “Learning possibilities” from new technologies were precluded at the same time as they were exhausted with the old, and the consequences were disastrous.
Nicholas Crafts observes that growth during the “climacteric” was not “too disappointing”; after all, it was faster than at any point during the classic Industrial Revolution. But Britain’s inability to maintain her mid-century rate of productivity advance is no statistical illusion, and requires explanation—especially in the context of her subsequent failure to achieve rates of modern economic growth. In the end, it seems that Britain was at least in part the victim of the environment that had produced the First Industrial Revolution. It was not the early start, necessarily, but the conditions that had made it possible that ironically reduced Britain’s apparently insurmountable lead.