By Sam Williams
From the Dollar-Gold Exchange System to the Dollar System
The Bretton Woods dollar-gold exchange standard began to unravel with the collapse of the gold pool in March 1968 and collapsed completely in August 1971, when Nixon formally ended the convertibility into gold of the U.S. dollar by foreign governments and central banks. The U.S. dollar, even dollars in the central banks or treasuries of foreign governments, was now a purely token currency and no longer a form of credit money. From now on, the dollar would follow the laws of token money, not credit money.
The question posed by Nixon’s August 1971 move was whether the U.S. dollar could maintain its position as the main world currency now that it was a token currency and not credit money. As long as the dollar had retained its convertibility into gold at a fixed rate by foreign central banks and treasuries—which also meant that the open market dollar price of gold could not move very far from the official $35 an ounce—commodity prices quoted in dollars and international debts denominated in dollars were in effect quoted and denominated in terms of definite quantities of gold.
But with the transformation of the dollar into token money, this was no longer true. The dollar no longer represented a fixed quantity of gold but a variable quantity. Its gold value could change drastically over a short period of time.
Under these conditions, could the U.S. dollar continue to function as the chief world reserve currency? Remember, the chief reserve currency is the currency in which the prices of internationally traded commodities such as oil are quoted, and in which international debts are consequently denominated. Because of this, governments, central banks and corporations that operate on an international scale are obliged to hold dollars as a reserve fund.
If the governments of the countries that produced primary commodities traded on the world market had reacted to Nixon’s move by starting to quote the prices of their commodities directly in gold rather than in dollars, Nixon’s move would have failed, and the United States would have been under great pressure to quickly return the dollar to gold convertibility. If internationally traded commodity prices began to be directly quoted in gold, international debts would have been increasingly denominated in gold as well, and governments, central banks and corporations would have shifted their reserve funds from dollars to gold.
If the United States had, all the same, gone ahead with the devaluation of the dollar, this would have meant that in terms of dollars, international debts of the U.S. government and U.S. corporations denominated in terms of gold would have automatically increased as the dollar price of gold rose. Under these conditions, a major fall in the dollar would have meant a growing wave of bankruptcies among U.S. corporations.
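The arithmetic behind this danger can be sketched in a few lines (the figures below are hypothetical, chosen only to illustrate the mechanism):

```python
# Illustrative arithmetic (hypothetical figures): the dollar burden of a
# debt denominated in gold rises automatically when the dollar is devalued.

def dollar_burden(debt_in_ounces: float, gold_price: float) -> float:
    """Dollar cost of repaying a debt fixed in ounces of gold."""
    return debt_in_ounces * gold_price

debt = 1_000_000.0                    # debt fixed at one million ounces of gold
before = dollar_burden(debt, 35.0)    # at the official $35/oz parity
after = dollar_burden(debt, 70.0)     # dollar devalued: gold at $70/oz

print(before)  # 35000000.0
print(after)   # 70000000.0 -- the dollar burden of the debt has doubled
```

A mere doubling of the dollar price of gold doubles the dollar burden of any debt fixed in gold, which is why a falling dollar combined with gold-denominated debts would have threatened U.S. corporations with a wave of bankruptcies.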
However, bankruptcy is a legal device enforced by the state power. What foreign state power in the world was strong enough to force U.S. corporations into bankruptcy and seize their assets? Weren’t virtually all states in the capitalist world either semi-colonies, neo-colonies, or at best satellites of the United States?
This is what the Nixon administration counted on when it ended the dollar’s gold convertibility. Nixon and his advisors assumed, correctly as it turned out, that no government would dare challenge the United States by having the internationally traded commodities produced in its territories quoted in gold rather than dollars. The dollar-gold exchange standard, which in one form or another had dominated the international monetary system since the end of World War I, was now transformed into the dollar standard.
Under the dollar standard, the United States—both at the government and corporate levels—can pay most of its debts in U.S. dollars. If the weight of the dollar debts threatens to bankrupt U.S. corporations or state and local governments, the U.S. Federal Reserve System can simply devalue the dollar—that is, let the dollar price of gold rise, as well as allow the exchange rate of the dollar to fall against foreign currencies—and the debt is lessened. And of course the U.S. Federal Reserve System, which in effect prints dollars, is at the service of both the U.S. government and the big U.S. corporations.
The limits to dollar creation
This doesn’t mean, however, that the United States can simply create any amount of dollars it wants, any more than a government that prints its own paper money can meet its expenses simply by printing money rather than levying taxes. If a government attempted to finance itself by simply printing paper money, the individual capitalists, including the corporate collective capitalists, would be unwilling to accept such a currency in return for their commodities. Therefore, even a government that prints “legal tender” paper money is forced to levy taxes.
The United States, though it can pay its foreign debts in a currency it prints itself, still has to show some restraint if the capitalists of the world, including the U.S. capitalists, are to continue to accept the dollar in exchange for commodities and as a medium for payment of debt. Just as a government cannot finance its expenditures simply by printing paper money rather than levying taxes, even under the dollar standard the United States cannot, for example, cease to export and only import. (1)
However, the dollar standard does give the United States the ability to run balance of trade deficits that are far larger than those of any other country. The limit is that when the trade deficits get too large, the dollar starts to fall rapidly against both gold and other currencies. Once that point is reached, inflation and interest rates start to climb quickly, undermining the domestic U.S. and world currency and credit system.
That is exactly what we saw in 1973-74 and 1979-80. When that point is reached, a severe recession soon follows that again reduces the U.S. trade deficit to the level the world market is willing to tolerate. How large the U.S. trade deficit can get depends on the level of interest rates and the general credibility of the dollar on one side and the level of world gold production on the other.
Both U.S. interest rates that are “too low” from the viewpoint of the money capitalists and declining gold production that undercuts the U.S. ability to run large trade deficits limit the dollar system. On the other hand, rising world gold production, all other things remaining equal, means that the United States can run a larger trade deficit.
The chronic though fluctuating U.S. trade deficit under the dollar system means that the United States enjoys a higher standard of living than its actual share of global production would otherwise allow. Whenever a dollar devaluation or sharper dollar crisis occurs, the fall in the trade deficit that follows inevitably means a drop in the standard of living of a considerable section of the U.S. population—for example, homeowners who can borrow against their home equity when the dollar is strong and credit is easy but lose that ability when a dollar decline leads to tighter credit. On the other hand, whenever the U.S. dollar gets stronger, leading to a higher trade deficit, the standard of living of a considerable section of the U.S. population—especially homeowners—rises.
These laws of the dollar system are illustrated by the evolution of the U.S. balance of trade deficit from 1971 onward.
During the 1970s, the dollar was a chronically “weak” currency—that is, the dollar price of gold despite some fluctuations was rising strongly, and the exchange rate of the dollar against most foreign currencies was falling—and therefore the U.S. balance of trade deficit could not grow very large. Between 1971 and the Volcker shock, the annual U.S. trade deficit remained below $30,000 million. Indeed, after the dollar crisis of 1973-74 led to the severe recession of 1974-75, the U.S. balance of trade swung into a short-lived surplus in 1975.
Because the trade deficit was still modest during the 1970s, the massive creditor position that the United States had accumulated from World War I until the end of the 1960s eroded only gradually. Through the prolonged economic crisis of 1968-82, and for some years beyond, the United States remained a creditor nation—the amount of money that was owed to it was much greater than the debts that it owed to foreigners.
After the Volcker shock, however, the dollar was stabilized within broad limits. While there were “bull markets” in gold, when the dollar price of gold rose, these were more than offset by “bear markets” in gold, when the dollar price of gold fell. Between the Volcker shock and the turn of the century, the dollar was on a secular rising trend against gold.
The United States could claim that the dollar was actually better than gold: not only did dollar-denominated securities such as Treasury notes bear interest, but even a miser who hoarded non-interest-bearing “green dollar” bills would have ended up holding more value in terms of gold than a miser who hoarded gold bullion. At the turn of the century, each dollar represented more gold than it had some 20 years earlier.
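The miser comparison can be made concrete with a toy calculation (the gold prices below are round illustrative numbers, not the author’s figures):

```python
# Toy comparison (illustrative prices): when the dollar price of gold falls
# over a period, a hoard of non-interest-bearing dollar bills ends the period
# worth more gold than a hoard of bullion bought at the start.

def gold_value_of_dollars(dollars: float, gold_price: float) -> float:
    """Ounces of gold a dollar hoard can buy at the given gold price."""
    return dollars / gold_price

start_price = 600.0   # hypothetical dollar price of gold around the Volcker shock
end_price = 280.0     # hypothetical price around the turn of the century

# The dollar miser holds $10,000 in bills and buys gold only at the end.
dollar_miser_oz = gold_value_of_dollars(10_000.0, end_price)
# The gold miser converts the same $10,000 into bullion at the start.
gold_miser_oz = gold_value_of_dollars(10_000.0, start_price)

print(round(dollar_miser_oz, 2))  # 35.71 ounces
print(round(gold_miser_oz, 2))    # 16.67 ounces -- the dollar hoarder comes out ahead
```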
So the U.S. Federal Reserve System not only brought an end to the dollar’s rapid decline between 1968 and the Volcker shock but actually followed a monetary policy that allowed a gradual rise in the gold value of the dollar. The Federal Reserve was able to follow such a monetary policy, without having to face disastrous deflationary “panics” because global gold production was in a strong upswing during those years. This upswing in gold production, in turn, reflected the low level of prices relative to values at the time of the Volcker shock.
‘Bretton Woods II’
This arrangement is sometimes called “Bretton Woods II.” Unlike Bretton Woods I, there was no public international conference that brought it into effect. But there was at least a tactical agreement—there may have been secret agreements as well—to the effect that the United States would prevent any further devaluation of the dollar but instead would encourage its gradual appreciation.
In exchange, the other capitalist governments and central banks would gradually sell off their remaining gold reserves and replace them with dollar reserves. This worked well for the central banks—as long as the dollar was stable or rising against gold. The gold sales by the central banks—and the International Monetary Fund—along with rising world gold production further strengthened the dollar.
Personality cults around the U.S. Federal Reserve System chief
A curious feature of Bretton Woods II was the building up of something of a personality cult around the head of the U.S. Federal Reserve System. Traditionally, central bankers had been gray figures working in the background, little known to the general public.
But the U.S. media went on a campaign of building what amounted to a personality cult around the reigning head of the Federal Reserve System beginning with Paul Volcker. The media claimed that the “Volcker standard” was better than the gold standard, because Volcker was a “genius.”
When Volcker stepped down as Fed chief in 1987 in favor of Alan Greenspan, the “personality cult” around Greenspan reached even greater heights than the one built up around Volcker. This was precisely because Greenspan carried far less authority in international financial circles than Volcker, who had, after all, actually stabilized the dollar. Volcker was and is highly respected in international financial circles. That is why he is a senior economic advisor to President Barack Obama.
Greenspan, a mediocre economist at best—even by the dismal standards of the modern bourgeois economics profession—was hailed as some kind of super-genius. (2) The “Greenspan standard” was proclaimed as something much better than the gold standard.
These strange “personality cults” were not accidents but rather attempts to convince central banks, governments, foreign and U.S. corporations and money capitalists of all nationalities to hold their reserves in U.S. dollars rather than in other currencies, and especially gold. It was claimed that gold was now almost completely “de-monetized” and essentially irrelevant to the international monetary system. The U.S. dollar and not gold was proclaimed by economists—and even many Marxists accepted this—to be the measure of value of commodities.
Of course, neither Volcker nor, still less, Greenspan was a super-genius. And even if they had been, they would not have been able to override the basic economic laws that govern the capitalist system.
The Bretton Woods II international monetary system was able to function for awhile only to the extent allowed by the basic economic laws Marx had explained in “Capital” and his other works.
Springtime for the money capitalists
At the dawn of Bretton Woods II, the combination of the beginning of the stabilization of the dollar—the growing probability that it would not lose any more gold value for many years to come—and the very high rate of interest that prevailed at the beginning of Bretton Woods II made the world’s money capitalists eager to buy high-yielding securities. Or what comes to exactly the same thing, they were eager to purchase securities at sharply depreciated prices. (3)
Money capitalists eagerly rushed into bonds that bore interest rates in excess of 10 percent—even on “risk free” U.S. government securities. (4) The same was true of severely depreciated corporate stocks that promised high yield and huge capital gains in the years ahead as profits and dividends soared and interest rates gradually returned to normal levels.
If you were a money capitalist, the opportunities for enrichment at relatively little risk had never been greater. After the long winter of 1968-1981, it was truly springtime for the money capitalists. The greatest “bull market” in both stocks and bonds in history was underway.
‘Inflation targeting’ dooms Bretton Woods II to eventual collapse
However, just like its predecessor, Bretton Woods II was doomed to collapse sooner or later. As I explained in last week’s post, the bourgeois “macro-economists” had concluded that the high rate of inflation that had characterized the 1970s must be avoided in the future. And they pretty much decided that it was utopian to try to stamp out the “business cycle.”
The macro-economists explained that the “business cycle” was based on unchanging human nature with its swings of excessive optimism and excessive pessimism. (5) In the future, they concluded, the central banks and governments should confine themselves to moderating the business cycle in order to prevent either the inflationary crises of the 1970s or the classic panics and deep depressions of old.
At the same time, the post-Volcker shock bourgeois macro-economists, whether the Friedmanites—the dominant trend—or “moderate Keynesians,” clung to the idea that the general price level should never be allowed to fall. Instead, the consensus among bourgeois economists held, prices must be allowed to rise at a rate of about 1 to 3 percent a year. If the rate of price increases fell below 1 percent, this would be dangerous. Why?
Suppose, these bourgeois economists argued, prices were increasing at a rate of less than 1 percent and a recession broke out. Horror of horrors, the cost of living might actually start to decline! And if that happened, buyers would tend to put off purchases in hopes of still lower prices, and a prolonged and possibly severe depression could set in.
On the other hand, a rate of inflation greater than 3 percent could degenerate into a 1970s-type stagflation, which should be avoided as well. This policy has been dubbed “inflation targeting.”
Inflation targeting has been adopted as the official policy of the new European Central Bank, and is known to be strongly favored by current Federal Reserve chief Ben Bernanke. Under inflation targeting, if the rate of inflation threatens to drop below 1 percent, the central bank will move to cut interest rates. If prices threaten to rise more than 3 percent, the central bank will move to raise interest rates.
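The decision rule described above can be sketched in a few lines (the 1-3 percent band is from the text; the quarter-point adjustment step and the zero floor are my own illustrative assumptions, not a documented central bank rule):

```python
# A minimal sketch of an inflation-targeting decision rule. The 1-3 percent
# band follows the text; the 0.25-point step and the floor at zero are
# hypothetical simplifications for illustration.

def target_rate(current_rate: float, inflation: float,
                low: float = 1.0, high: float = 3.0,
                step: float = 0.25) -> float:
    """Return the policy interest rate after one targeting decision."""
    if inflation < low:        # deflation risk: cut rates
        return max(0.0, current_rate - step)
    if inflation > high:       # stagflation risk: raise rates
        return current_rate + step
    return current_rate        # inflation inside the band: hold steady

print(target_rate(5.0, 0.5))   # 4.75 -- cut
print(target_rate(5.0, 4.0))   # 5.25 -- raise
print(target_rate(5.0, 2.0))   # 5.0  -- hold
```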
Bretton Woods II could work for a certain period of time
Just as the original Bretton Woods system benefited from the low prices relative to underlying labor values of the Depression, so Bretton Woods II benefited from the very low prices of commodities in terms of gold that followed the protracted dollar crisis of 1968-81. The low level of prices relative to labor values was confirmed by the strong upward trend in world gold production between the early 1980s and the turn of the century.
With increasing amounts of newly mined gold hitting the market, the U.S. Federal Reserve Board and the other central banks had far more freedom to follow “expansionary policies” without seeing the price of gold start to rise in terms of the currency they were issuing. As a result, interest rates could gradually decline without the currency price of gold—especially the dollar price of gold—resuming its upward movement.
As during the early years of Bretton Woods I, the central bankers claimed that they had discovered how to keep prices on a gradual but non-accelerating upward path that would guarantee permanent prosperity, interrupted only by relatively infrequent and “mild” recessions.
But a “creeping inflation” of 1 to 3 percent, according to official cost of living figures, meant that in terms of gold—real money—commodity prices were again rising, much like they had during the first Bretton Woods era and during earlier eras of capitalist prosperity. This meant that sooner or later, the general price level would rise substantially above the value of commodities and gold production would again begin to decline.
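A back-of-envelope compounding calculation (my own sketch, not the author’s) shows how even “creeping” inflation adds up over a couple of decades:

```python
# Compounding a "creeping" 1-3 percent annual inflation over 20 years.
# Illustrative arithmetic only; the band is from the text, the horizon is mine.

def price_level_after(years: int, annual_rate: float) -> float:
    """General price level relative to a starting level of 1.0."""
    return (1.0 + annual_rate) ** years

for rate in (0.01, 0.02, 0.03):
    print(f"{rate:.0%}: {price_level_after(20, rate):.2f}")
# 1%: 1.22
# 2%: 1.49
# 3%: 1.81
```

Even at the bottom of the target band, the price level rises by more than a fifth in 20 years; at the top of the band it nearly doubles, steadily lifting prices above underlying values.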
When the moment arrived, as it did in the first decade of 21st century, central banks would face exactly the same dilemma they faced in the late 1960s.
They could allow nominal prices to fall. This would save Bretton Woods II and would enable interest rates to fall to low levels and stay there for a prolonged period. This would encourage an eventual recovery in the profit of enterprise and an era of much more rapid economic growth. Such an era would resemble the early post-World War II era as opposed to the semi-stagnation of the “Great Moderation.” But the price to be paid would be financial panic followed by a prolonged period of deep depression.
If instead they did everything in their power to keep prices from falling—which is what the overwhelming majority of bourgeois economists recommended—the U.S. dollar and other currencies would experience a wave of devaluations that would end in a new wave of severe inflation and soaring interest rates. After a second “stagflation” episode, it might not be so easy to establish “Bretton Woods III.” Perhaps the market would demand nothing less than a full restoration of the gold standard.
This forms the backdrop to the panic of 2007-09, which will be the subject of next week’s post.
Bretton Woods II and world trade
As the dollar was stabilized during the Volcker shock, foreign capital began to pour into dollar-denominated assets. Money capitalists were eager to take advantage of the sky-high interest rates. This drove up the dollar against both gold and most other currencies. The strong demand for dollars enabled the U.S. trade deficit to soar. The United States could now run large deficits in its balance of trade without this leading to a renewed depreciation of the dollar and the resulting skyrocketing inflation and soaring interest rates.
The U.S. balance of trade—from early postwar surplus to chronic deficit
By 1971, the traditional surplus in the U.S. balance of trade, which had reflected the overwhelming competitive advantage of the mighty U.S. industrial economy that had arisen during the late 19th and early 20th centuries, had vanished. The decline and eventually the end of the U.S. trade surpluses reflected the much more rapid growth of West European and Japanese capitalism after World War II.
Early in the postwar period, the wages of workers in Western Europe were low. Two devastating world wars, runaway inflation in some countries, the Depression, and the fascist and military dictatorships had crushed the workers’ movements in many countries after World War I. This had allowed the rate of surplus value to soar, lifting the rate of profit.
In contrast, the United States had seen the rise of CIO industrial unions and New Deal reforms in the period between the wars. These developments put downward pressure on the rate of surplus value within the United States. After World War II, capital, always looking for the highest rate of surplus value and profit, flowed from the United States to Western Europe and Japan, where the rate of profit was higher. This was further encouraged by the decision of the United States to open up its home market to foreign competition that I examined in earlier posts.
The many new factories built in Western Europe and Japan after World War II could use the state-of-the-art technology of the time. Within a few decades, Western European and Japanese factories were often more advanced and could often produce commodities of higher quality at lower prices than the aging factories of the United States and Britain. The result was a shrinking and then, in 1971—the year the dollar-gold exchange system gave way to the dollar system—a vanishing U.S. trade surplus.
From 1971 onward, the “normal” situation was for the United States to run a foreign trade deficit. The one exception was the recession-depression year of 1975. As I explained in an earlier post, one of the purposes of crises of overproduction is to bring world trade back into equilibrium. This means that countries running balance of trade surpluses will tend to see these surpluses disappear during overproduction crises, while those countries running deficits will tend to see the deficits shrink or even turn into surpluses. The U.S. balance of trade surplus during the recession-depression year of 1975 illustrates this law.
The United States ran a balance of trade deficit of $1,302 million in 1971. With the exception of the short-lived surplus in 1975, the United States ran relatively modest trade and balance of payments deficits during the remainder of the 1970s. (U.S. Census Bureau, Foreign Trade Division)
In the years immediately preceding the Volcker shock, the United States was fighting the trend toward higher interest rates by creating more and more token money. The U.S. economy resembled a leaking balloon that had to be kept inflated with ever more newly created money. The resulting march toward higher and higher interest rates came to a climax with the Volcker shock.
As I explained last week, the extremely high interest rates of this period, which more or less wiped out the profit of enterprise, encouraged the decay of industrial enterprises through most of the capitalist world. But the trend toward “de-industrialization” was even more pronounced in the United States than it was in Western Europe and Japan—though it became more pronounced in Japan after the 1989-90 economic crash in that country.
Both the dollar and U.S. trade deficit soar
As the dollar soared against both gold and other currencies, the ability of the United States to run a trade deficit increased. In 1981, during the Volcker shock, the United States ran a trade deficit of $16,172 million. By 1987, this had soared to $151,684 million. By the mid-1980s, though it was not in an official National Bureau of Economic Research “contraction,” the U.S. economy experienced something of a recession as its home market was flooded by cheap commodities from abroad. The age of “de-industrialization” and corporate “downsizing” had begun.
From world’s largest creditor to world’s largest debtor
After years of weakness, the suddenly strong U.S. dollar was dubbed the “super-dollar.” The soaring U.S. trade deficits of the “super-dollar” years also meant, for reasons that I described above, a rapid accumulation of the debt that U.S. corporations, private individuals, and the U.S. government owed to foreign capitalists. The United States was rapidly running down its creditor position.
By the mid or late 1980s, the U.S. creditor position that had first arisen during World War I had become a debtor position. Almost overnight, the United States passed from the world’s largest creditor to the world’s largest debtor.
The sudden vastly increased ability of the United States and the U.S. government to borrow encouraged the Reagan administration to pass a huge regressive tax cut and at the same time greatly increase the level of military spending.
The increased military spending was specifically designed to encourage the growing capitulationist wing of the ruling Communist Party of the Soviet Union. Mikhail Gorbachev, who became general secretary of the Central Committee of the CPSU in March 1985, represented this grouping.
Gorbachev and his supporters argued that, given the falling price of oil that followed the Volcker shock, on one side, and the skyrocketing level of U.S. military spending under Reagan, on the other, there was “no alternative” but to capitulate to all U.S. demands.
The Plaza Accord devalues the dollar
By 1985, the worsening trade situation and the growing “de-industrialization” of the U.S. economy clearly alarmed the Reagan administration. In September 1985, representatives of the United States and its satellite imperialists met at the Plaza Hotel in New York and agreed to devalue the dollar. This was reflected subsequently in both a rise in the dollar price of gold as well as a drop in the dollar exchange rates against other currencies.
For example, in early 1985, at the peak of “super-dollar” strength, the dollar price of gold fell briefly below $285 an ounce. At the end of 1987, in the wake of the 1987 stock market crash, the dollar price of gold rose briefly to $485 an ounce. This was the highest dollar price of gold—or lowest gold value of the dollar—between the Volcker shock and the turn of the century.
There was no repeat of the dollar plunge of the 1970s, however. Instead, there was a limited and controlled devaluation of the dollar that was carefully managed by the central banks. Even this controlled and temporary devaluation of the U.S. dollar put a certain strain on Bretton Woods II. However, Bretton Woods II was able to survive it because of its limited and temporary nature. Within a few years, the dollar was to renew its rise.
This dollar devaluation did lead to both an official NBER contraction in 1990-91—considerably milder than those that followed the dollar “panics” of 1973-74 and 1979-80—and a declining U.S. trade deficit. From a peak of $151,684 million in 1987, the U.S. trade deficit declined to $31,135 million during the recession year of 1991. The U.S. balance of payments on current account even went into a brief surplus in 1991 as a result of checks written by the U.S. satellite imperialist countries to pay for the Gulf War of aggression against Iraq in 1991.
This temporary decline in the U.S. trade deficit encouraged some U.S. economists to proclaim the approaching end of the deficit altogether. “The Commerce Department,” Jonathan Peterson wrote in the August 28, 1991, edition of the Los Angeles Times, “said Tuesday that the gap between exports and imports declined to $15.6 billion in the second quarter to the lowest total in eight years.”
“As a result,” Peterson explained, “some optimists boldly forecast an end to the deficit that once rattled the financial world, with Americans again selling as much to other nations as they buy from them.
“‘It’s absolutely going to happen,’ declared Ken Goldstein, an economist at the Conference Board, a business research organization in New York. ‘The question is will it happen as early as 1992 or 1993 or 1994?’”
Something Mr. Goldstein overlooked was that one of the reasons the U.S. trade deficit was so low during 1991 was the worldwide economic recession. It was natural that a country with a chronic tendency toward a trade deficit, as the United States had under the dollar standard, would experience a decline in the deficit during a cyclical recession.
One of the functions of crises of general overproduction—recessions—is, after all, to move world trade back toward balance through the contraction of international credit. But as the world economy pulled out of the recession of the early 1990s, the U.S. balance of trade deficit would be expected—Goldstein notwithstanding—to start rising once again. (6)
Other factors were working toward a renewed growth in the U.S. balance of trade deficit. One was the ongoing destruction of the Soviet Union and its economy under Mikhail Gorbachev. In its final convulsions, the Gorbachev regime dumped large amounts of gold on the world market. This helped drive down the dollar price of gold.
Partly for this reason, and more fundamentally due to the rising production of gold, the dollar price of gold, which briefly rose above $400 an ounce on the eve of the U.S. attack against Iraq, then fell during the rest of the year to below $354 an ounce by the end of 1991.
By 1993, the U.S. trade deficit had risen back to $70,311 million, and it rose further to $98,493 million in 1994. By 1996, with the international industrial cycle well into its upward phase, the U.S. trade deficit was above $100,000 million once again. But that was before the outbreak of the Asian crisis in 1997.
The Asian economic crisis of 1997 and the second ‘super-dollar’ episode
The economic crisis that began in Thailand in July 1997 and then rapidly spread through many “third world” countries sent both the U.S. dollar and the U.S. trade deficit soaring. During the mid-1990s, as the world economy recovered from the early 1990s recession, bank lending to many of the so-called “tigers” such as Thailand and Indonesia soared. (7)
Soon, many of these countries were running huge balance of trade deficits, which were financed by massive bank loans. However, in 1997 the banks suddenly panicked, realizing that many of these loans could not in fact be repaid, and international credit available to “developing” countries—which now included the former Soviet Union and Eastern Europe—abruptly contracted.
Money capitalists panicked and moved their capital back to the safety of the advanced imperialist countries. The U.S. dollar soared, and interest rates plummeted in the United States even as they soared in much of the “developing” world. The resulting long boom in mortgage lending, including home equity loans, set off the boom in residential construction that would not peak until 2006.
The flood tide of Bretton Woods II
As the dollar soared, so did the U.S. balance of trade deficit. By 2006, as the mortgage-housing construction boom crested, the U.S. trade deficit peaked at $760,359 million!
The turn of the century, essentially Clinton’s second administration, proved to be the flood tide of Bretton Woods II. Money capital that had been invested in “third world” countries such as Indonesia fled to the U.S. dollar. The dollar-price of gold reached the lowest level—or what comes to exactly the same thing, the gold value of the dollar reached its highest level—of the entire post-Volcker shock era.
As would be expected, this led to plunging interest rates in the imperialist countries, which launched the mortgage-residential construction boom. As I mentioned above, during the earlier super-dollar episode of the mid-1980s the Reagan administration showed some concern over the soaring trade deficit and moved to devalue the dollar.
The Clinton administration, in contrast, did little or nothing to counteract the soaring trade deficit. Instead, it boasted about “unprecedented prosperity” fueled by the sudden flood of mortgage credit and soaring stock markets. The stock market rose sharply, led by the “high tech” companies listed on the NASDAQ stock exchange, whose index hit 5,000 in March 2000.
The result was a whole new crop of billionaires as well as a wave of swindling, revealed by the wave of bankruptcies that swept the business world after the U.S. economy fell into recession starting in late 2000. The most infamous of these was the collapse in late 2001 of Enron, which had been considered the most innovative of the “new economy” companies.
Enron used virtually every swindling technique that had been developed during the history of capitalism to convince its stockholders that it was making huge profits. In the end, the stockholders, including those Enron employees who had been convinced to put their retirement savings into its shares, lost everything.
True, the official unemployment rate—helped by revisions in how it was calculated as well as by the soaring prison population (8)—did drop to its lowest levels of the “Great Moderation,” though even official unemployment soon rose again as the flood of imports helped trigger a new U.S. recession.
Clinton was joined in his boasting about the U.S. economy’s alleged stellar performance by the far right-wing head of the Federal Reserve Board, Alan Greenspan. Though Greenspan was a Republican, in his book “The Age of Turbulence” he indicated that it was Clinton—not Reagan or either Bush—who best represented the economic policies he favored. He was particularly pleased by Clinton’s moves to balance the budget and abolish “welfare as we know it.”
Greenspan “explained” that in the “information age” industrial production and employment were no longer important criteria of the strength of a nation’s economy. Based on “information” and the Internet, the United States was set for decades of continuing prosperity. The Clinton administration and Democratic Party supporters advanced exactly the same arguments.
One of Clinton’s arguments for his “welfare reform”—actually a counter-reform—was that with permanent prosperity now assured by U.S. leadership of the new “information- and Internet-based economy,” U.S. workers would always be able to find jobs in the future. This would be particularly true if the whip of hunger, which had been weakened by the right to receive welfare, was restored by withdrawing that right. (9)
Unlike Reagan, who confined himself to dismantling the programs of Lyndon Johnson’s so-called “Great Society,” passed during the 1960s, the Democrat Clinton went after a basic New Deal program enacted during the 1930s.
The decay of the U.S. economy during the Great Moderation
Compared to today’s global depression, the Clinton years do look like the “good old days.” During the last U.S. presidential election, held at the height of the panic in November 2008, which elected the current U.S. president, Democrat Barack Obama, the Democrats made much of the “prosperity” of the Clinton years. But did the Clinton years really see an advance for the U.S. economy, or were they rather years of decay and decline?
In Lenin’s famous pamphlet “Imperialism, the Highest Stage of Capitalism,” written in 1916 during World War I, the soon-to-be leader of the Russian Revolution examined the development of the world capitalist economy in the early years of the 20th century. He explained that economic development during those generally prosperous years had been dominated by three features.
First and most important was the growth of monopoly. Second, he emphasized the growing role of the banks and other financial institutions—the growth of “finance capital.” And third, he saw growing parasitism of a handful of imperialist countries, which exploited the vast majority of countries—many in those days still colonies of the exploiting countries.
After World War II, some of these trends seemed to reverse themselves. The fall in interest rates as a result of the Depression strengthened the position of industrial and commercial capitalists relative to money capitalists. The rising profit of enterprise helped make possible a powerful upswing of industrial production, especially in Western Europe and Japan, but to a lesser degree in the United States and Britain, as well.
The new wave of industrialization in the imperialist countries highlighted the role of the industrial corporations as opposed to financial institutions. Observing these early postwar trends, Paul Sweezy, Ernest Mandel and other leading Marxist economists of the day emphasized the reduced role of “finance capital” and the increased power of the industrial corporations.
The Marxists of the post-World War II period tended to see the monopoly of industrial production by a handful of imperialist countries as a virtually permanent feature of the world capitalist economy.
However, unlike what many Marxists of the post-World War II generations claimed—Marxists who often were more likely to refer to Lenin’s famous pamphlet “Imperialism, the Highest Stage of Capitalism” than to carefully read it—this was not actually Lenin’s view.
Lenin noted the relative—in those days it was still relative, not absolute—decline of the number of workers in basic industry in England and Wales. England had dominated world capitalism during the 19th century much as the United States was to dominate it during the 20th century. Between 1851 and 1901, according to Lenin’s figures, the total population of England and Wales rose from 17.9 million to 32.5 million. But the workers employed in basic industry grew only from 4.1 million in 1851 to 4.9 million in 1901. Overall, the workers in basic industry were 23 percent of the population in 1851 but only 15 percent in 1901.
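The shares Lenin cites follow directly from his figures; a quick arithmetic check (numbers in millions, taken from the passage above):

```python
# Lenin's figures for England and Wales: total population and
# workers employed in basic industry, in millions.
figures = {
    1851: {"population": 17.9, "industrial_workers": 4.1},
    1901: {"population": 32.5, "industrial_workers": 4.9},
}

for year, f in figures.items():
    share = f["industrial_workers"] / f["population"] * 100
    print(f"{year}: {share:.0f}% of the population in basic industry")
# 1851: 23% of the population in basic industry
# 1901: 15% of the population in basic industry
```

So while the absolute number of industrial workers crept up, their share of the population fell by a third over the half century.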
The role of the British industrial capitalists, far from growing, was declining, and the number of British factory workers was barely growing. Lenin saw these trends as a sure sign of the decline of British capitalism, not its “advance” into a new “post-industrial stage.”
And though these tendencies were most advanced in Britain—and perhaps still are, though the United States has been catching up in this regard—he saw the beginnings of similar trends in the other imperialist countries.
Second, Lenin saw the tendency of basic industry—still only incipient at that time—to move from the imperialist countries to the oppressed countries. “The description of ‘British imperialism’ in Schulze-Gaevernitz’s book,” Lenin wrote, “reveals the same parasitical traits. The national income of Great Britain approximately doubled from 1865 to 1898, while the income ‘from abroad’ increased ninefold in the same period. While the ‘merit’ of imperialism is that it ‘trains the Negro to habits of industry’ (you cannot manage without coercion … )….”
However, Schulze-Gaevernitz saw a great “danger” in this. And what was this danger? The danger was that “‘Europe will shift the burden of physical toil—first agricultural and mining, then the rougher work in industry’—on to the coloured races, and itself be content with the role of rentier, and in this way, perhaps, pave the way for the economic, and later, the political emancipation of the coloured races.’”
A grave danger indeed from the viewpoint of European racists such as Schulze-Gaevernitz!
While the industrialization of Africa—notwithstanding Schulze-Gaevernitz’s fears—still unfortunately lies somewhere in the future, the industrialization of Asia is advancing by leaps and bounds. In the meantime, the decaying United States—and Western Europe and Japan—are increasingly content simply to collect dividends and interest payments, or to accumulate “wealth” in the form of patents and copyrights—for example, patents on industrial processes but also on computer software algorithms, and copyrights on movies and music. Hollywood is becoming a very important U.S. export industry! Isn’t all this at the heart of the “new economy” that Clinton and Greenspan boasted about?
On the other hand, due to great people’s revolutions that occurred in their countries, the present-day governments of China and Vietnam are able to use the “shift” of the “rougher work in industry on to the coloured races” for the “economic and … political emancipation of the coloured races” of their countries. Today, U.S. foreign policy, whether under Bush or now Obama, is to prevent similar people’s revolutions in other countries from leading to the “economic” and “political emancipation of” the other “coloured races.”
These tendencies to decay noted by Lenin almost a century ago are now at work in all the economies of the imperialist countries, but on steroids compared to the situation a century ago. Instead of a slow growth in factory employment, for example, we in the United States see a 30-year-long decline in factory employment!
At the other pole, we see the economic rise of Asia, especially China with its incredible industrial growth, even if this growth is for now being achieved largely on a capitalist basis. However, the material conditions for China’s final liberation from imperialist domination of the world economy and the social liberation of its working class that were so lacking in 1949 when “China first stood up” are now being created on a gigantic level on a daily basis.
This is exactly the “nightmare” that European racists like the now justly forgotten Schulze-Gaevernitz were so concerned about a century ago. And it explains why U.S. foreign policy is dedicated to preventing other countries from “standing up” the way China stood up in 1949.
Writing with the hindsight that post-World War II Marxist economists such as Paul Sweezy and Ernest Mandel lacked, we can now see that the aftermath of the Depression temporarily slowed and even reversed some of the tendencies to decay that Lenin noted in his “Imperialism.” For example, there was a renewed surge in factory employment.
But in the long run, far from ending the decay of “Western” capitalism, Keynesian economic policies on the contrary actually accelerated the trends of decay that were highlighted by Lenin.
For example, wasn’t it precisely the “Keynesian economic policies” that drove interest rates up to unheard-of levels for a protracted period of time? This created a springtime for the “money capitalists” at the expense of the industrial and commercial capitalists, notwithstanding Keynes’s own hopes and predictions about the “euthanasia of the rentier.”
The decline in factory employment in the imperialist countries—especially the United States—went hand in hand with the growth of the power of finance capital. In the fall of 2008, the “dictatorship of finance capital” came out into the open for all to see. Today, imperialism is especially rotten in the United States, its very center.
The United States, though the center of the most powerful empire in history, is not even a creditor—it is a debtor! This is the context in which the crisis of 2007-09 broke out, the subject of next week’s post.
1 Economists have known since at least the 18th century that the issue of currency must have not only a flux mechanism but a reflux mechanism as well. For example, a government that prints its own legal tender currency issues or “fluxes” the currency when it spends money and “refluxes” it when it collects taxes. A bank of issue “fluxes” the currency when it loans money, and “refluxes” it when the debts created by its loans are repaid.
Under the dollar standard, the dollar is “fluxed” when the United States imports commodities and pays dollars for those commodities. The dollars are “refluxed” as a result of exports, causing the dollars to flow back to the issuer, the United States. The dollar is also “refluxed” back to the United States when a country that exports to the United States uses the dollars it earns to make loans back to the United States—for example, by buying U.S. Treasuries.
If this did not happen, the dollar would quickly flood the circuits of international trade, leading to its collapse against gold and foreign currencies. This would cause the internal currency and credit system of the United States to collapse. Demand would collapse within the U.S. economy, causing imports to collapse, while domestic U.S. businesses would be forced to find markets abroad through massive exports.
However, as long as the dollar standard lasts, foreign governments, central banks and corporations have to hold a reserve fund in dollar-denominated assets in order to purchase dollar-denominated commodities and meet dollar-denominated debts that are falling due. As long as foreign governments, central banks and international corporations are willing to invest their surplus dollars in dollar-denominated assets as opposed to using the surplus dollars to purchase gold or other currencies, the United States can import more than it exports, though it has to run up a growing dollar-denominated foreign debt in order to do this.
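The flux/reflux circuit described in this footnote can be sketched as a toy ledger. All names and numbers below are illustrative, not real trade data:

```python
# A toy ledger for the flux/reflux mechanism under the dollar standard.
# Illustrative only: the class and amounts are hypothetical.

class DollarLedger:
    def __init__(self):
        self.dollars_abroad = 0.0   # dollars held outside the United States
        self.us_foreign_debt = 0.0  # dollar debt owed abroad (e.g. Treasuries)

    def us_imports(self, amount):
        """Flux: dollars flow out in payment for imports."""
        self.dollars_abroad += amount

    def us_exports(self, amount):
        """Reflux: dollars flow back in payment for exports."""
        self.dollars_abroad -= amount

    def foreign_buys_treasuries(self, amount):
        """Reflux as lending: surplus dollars return as U.S. foreign debt."""
        self.dollars_abroad -= amount
        self.us_foreign_debt += amount

ledger = DollarLedger()
ledger.us_imports(100)               # the U.S. imports more than it exports...
ledger.us_exports(60)
ledger.foreign_buys_treasuries(40)   # ...and the gap returns as loans

print(ledger.dollars_abroad)    # 0.0  -- the dollars are fully refluxed
print(ledger.us_foreign_debt)   # 40.0 -- but the dollar-denominated debt grows
```

The point of the sketch: if the `foreign_buys_treasuries` step stopped, `dollars_abroad` would grow without limit, which is the flooding of the circuits of international trade described above.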
2 Alan Greenspan, a Republican, was a far-right ideologue, though much more flexible in practice than Milton Friedman or the Austrian economists who had influenced him. He was a member of the extreme right-wing Ayn Rand circle. Rand and her family had fled the Bolshevik Revolution in Russia and emigrated to America when their family business was nationalized by the workers’ revolution. As a result, Rand had a lifelong hatred of the workers’ movement, not only the revolutionary wing but even the liberal reformist wing.
Ronald Reagan appointed Greenspan to replace the Democrat Volcker in 1987. During his term in office, Greenspan was hailed by the U.S. media as a “maestro” who had masterminded the “prosperity” of the “Great Moderation.” Greenspan hailed the boom in mortgages, including sub-prime mortgages, and home construction that developed from 1997 onwards. He used none of the regulatory powers that the Federal Reserve System possessed to check the activity of sub-prime mortgage swindlers, who entrapped many unwary first-time home buyers into purchasing homes that they could not possibly pay for. After the panic began in August 2007, the reputation of Greenspan—by then retired—plummeted, and he even became something of a scapegoat for the crisis.
3 The rate of interest moves inversely to the price of securities. The extremely high interest rates immediately after the Volcker shock meant that the prices of securities, both stocks and bonds, were extremely low. Money capitalists bought these securities at very low prices and were vastly enriched when they soared in value as interest rates gradually fell in the years that followed the Volcker shock.
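The inverse relation between interest rates and security prices that this footnote relies on can be illustrated with the standard present-value formula for a bond; the numbers below are hypothetical:

```python
# Price of a bond as the present value of its coupons and face value,
# discounted at the prevailing market yield r. Illustrative numbers only.

def bond_price(face, coupon_rate, years, r):
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + r) ** t for t in range(1, years + 1))
    pv_face = face / (1 + r) ** years
    return pv_coupons + pv_face

# A 10-year bond with a 5% coupon on a $1,000 face value:
print(round(bond_price(1000, 0.05, 10, 0.15), 2))  # deep discount when yields are at 15%
print(round(bond_price(1000, 0.05, 10, 0.05), 2))  # 1000.0 -- par when yield equals coupon
```

A money capitalist who bought at post-Volcker-shock yields and held as yields fell captured exactly this price appreciation.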
4 The bonds issued by the Federal government of the United States are often described as risk free. It is true that it seems extremely unlikely that the U.S. government would ever default on these bonds, because it owns the presses that print the dollars it owes on them. However, if the U.S. currency is sharply devalued, as it was between 1968 and the Volcker shock, the owners of U.S. government bonds can take a major loss in terms of real money—gold. So they are not exactly risk free—the media and the bourgeois economists notwithstanding. There is less risk in the short-term Treasury bills that run for as little as three months. But a sharp devaluation of the dollar over even a three-month period is far from impossible.
5 If it is caused by unchanging “human nature,” why then does the “business cycle” only manifest itself under the capitalist mode of production, and even then not until 1825 onward? Business cycles were not known in the Soviet Union or its Eastern European allies when their planned economies were in effect, though they have reappeared in these countries since capitalist relations were fully restored from 1989 onward.
6 The falling U.S. trade deficit implied a lowering of the U.S. standard of living. Indeed, the recession of 1990-91 saw a considerable contraction of credit expressed through the collapse of the U.S. savings and loan system, which had issued much of the total quantity of mortgages that financed residential construction. The real-estate market and residential construction fell into crisis. This probably played the decisive role in the defeat of incumbent George Bush the elder in the November 1992 U.S. presidential election by his Democratic challenger Bill Clinton.
7 Though the crisis began in Thailand in July 1997, Indonesia turned out to be the Asian country hardest hit by the crisis. The Indonesian dictatorship headed by General Suharto had emerged from a bloody CIA-supported counterrevolution in 1965 that had destroyed Indonesia’s large Communist Party.
A million or more people died during the reign of terror that accompanied the rise of Suharto, far more than the number killed by the fascist dictatorships of Mussolini and Hitler before the outbreak of World War II. After World War II began, however, many more than that died at the hands of the fascists, including the six million European Jews who died in the Nazi holocaust and millions of others as well, including Roma people who are once again being scapegoated by today’s European fascists. So even under Suharto, the Indonesian reactionaries could not compete with the fascists of white civilized Europe in this respect.
Before the July-August crash of 1997, the bourgeois media hailed Indonesia as a “tiger economy,” implying that under the pro-U.S. Suharto regime Indonesia was experiencing a rapid industrialization. Indonesia, in order to attract the foreign capital to industrialize, had allowed foreign money lenders to freely withdraw their capital whenever they chose, a policy quite unlike that of China or Vietnam, for example. When panic struck in the summer of 1997, foreign capitalists quickly withdrew their money capital, and the Indonesian economy collapsed virtually overnight. The now completely discredited dictator was soon ousted, and Indonesia has never again been described as a “tiger economy.”
8 If a prison population that was approaching 2 million under Clinton had been counted as unemployed, the official rate of unemployment would have been considerably higher.
9 The demand for “relief” had emerged as a main demand raised by the councils of the unemployed that were organized in the United States during the super-crisis of 1929-33. Under the New Deal, if a worker exhausted his or her unemployment insurance—or had never been able to get the job that would have made him or her eligible for unemployment insurance in the first place—there would be welfare to stave off out-and-out starvation. Clinton’s “reform,” however, limited the right of a worker to receive welfare to only five years. Clinton explained that with the U.S. economy doing so well, and jobs now so plentiful, workers would always be able to find jobs, so they would no longer need a right to welfare.