Sunday, December 20, 2009

Bernanke Answers Economist's Questions

Several economists submitted the following questions to senators to be asked during the confirmation hearing of Federal Reserve Chairman Ben Bernanke.

A. Anil Kashyap, University of Chicago Booth School of Business: With the unemployment rate hovering around 10%, the public seems outraged at the combination of three things: a) substantial TARP support to keep some firms alive, b) allowing these firms to pay back the TARP money quickly, c) no constraints on pay or other behavior once the money was repaid. Was it a mistake to allow b) and/or c)?

TARP capital purchase program investments were always intended to be limited in duration. Indeed, the step-up in the dividend rate over time and the reduction in TARP warrants following certain private equity raises were designed to encourage TARP recipients to replace TARP funds with private equity as soon as practical. As market conditions have improved, some institutions have been able to access new sources of capital sooner than was originally anticipated and have demonstrated through stress testing that they possess resources sufficient to maintain sound capital positions over future quarters. In light of their ability to raise private capital and meet other supervisory expectations, some companies have been allowed to repay or replace their TARP obligations. No targeted constraints have been placed on companies that have repaid TARP investments. However, these companies remain subject to the full range of supervisory requirements and rules. The Federal Reserve has taken steps to address compensation practices across all firms that we supervise, not just TARP recipients. Moreover, in response to the recent crisis, supervisors have undertaken a comprehensive review of prudential standards that will likely result in more stringent requirements for capital, liquidity, and risk management for all financial institutions, including those that participated in the TARP programs.

B. Mark Thoma, University of Oregon and blogger: What is the single most important cause of the crisis, and what is being done to prevent its recurrence? The proposed regulatory structure seems to take as given that large, potentially systemically important firms will exist; hence the call for ready, on-the-shelf plans for the dissolution of such firms and for the authority to dissolve them. Why are large firms necessary? Would breaking them up reduce risk?

The principal cause of the financial crisis and economic slowdown was the collapse of the global credit boom and the ensuing problems at financial institutions, triggered by the end of the housing expansion in the United States and other countries. Financial institutions have been adversely affected by the financial crisis itself, as well as by the ensuing economic downturn.

This crisis did not begin with depositor runs on banks, but with investor runs on firms that financed their holdings of securities in the wholesale money markets. Much of this occurred outside of the supervisory framework currently established. An effective agenda for containing systemic risk thus requires elimination of gaps in the regulatory structure, a focus on macroprudential risks, and adjustments by all our financial regulatory agencies.

Supervisors in the United States and abroad are now actively reviewing prudential standards and supervisory approaches to incorporate the lessons of the crisis. For our part, the Federal Reserve is participating in a range of joint efforts to ensure that large, systemically critical financial institutions hold more and higher-quality capital, improve their risk-management practices, have more robust liquidity management, employ compensation structures that provide appropriate performance and risk-taking incentives, and deal fairly with consumers. On the supervisory front, we are taking steps to strengthen oversight and enforcement, particularly at the firm-wide level, and we are augmenting our traditional microprudential, or firm-specific, methods of oversight with a more macroprudential, or system-wide, approach that should help us better anticipate and mitigate broader threats to financial stability.

Although regulators can do a great deal on their own to improve financial regulation and oversight, the Congress also must act to address the extremely serious problem posed by firms perceived as “too big to fail.” Legislative action is needed to create new mechanisms for oversight of the financial system as a whole. Two important elements would be to subject all systemically important financial firms to effective consolidated supervision and to establish procedures for winding down a failing, systemically critical institution to avoid seriously damaging the financial system and the economy.

Some observers have suggested that existing large firms should be split up into smaller, not-too-big-to-fail entities in order to reduce risk. While this idea may be worth considering, policymakers should also consider that size may, in some cases, confer genuine economic benefits. For example, large firms may be better able to meet the needs of global customers. Moreover, size alone is not a sufficient indicator of systemic risk and, as history shows, smaller firms can also be involved in systemic crises. Two other important indicators of systemic risk, aside from size, are the degree to which a firm is interconnected with other financial firms and markets, and the degree to which a firm provides critical financial services. An alternative to limiting size in order to reduce risk would be to implement a more effective system of macroprudential regulation. One hallmark of such a system would be comprehensive and vigorous consolidated supervision of all systemically important financial firms. Under such a system, supervisors could, for example, prohibit firms from engaging in certain activities when those firms lack the managerial capacity and risk controls to engage in such activities safely. Congress has an important role to play in the creation of a more robust system of financial regulation, by establishing a process that would allow a failing, systemically important non-bank financial institution to be wound down in an orderly fashion, without jeopardizing financial stability. Such a resolution process would be the logical complement to the process already available to the FDIC for the resolution of banks.

C. Simon Johnson, Massachusetts Institute of Technology and blogger: Andrew Haldane, head of financial stability at the Bank of England, argues that the relationship between the banking system and the government (in the U.K. and the U.S.) creates a “doom loop” in which there are repeated boom-bust-bailout cycles that tend to cost the taxpayer more and pose a greater threat to the macroeconomy over time. What can be done to break this loop?

The “doom loop” that Andrew Haldane describes is a consequence of the problem of moral hazard in which the existence of explicit government backstops (such as deposit insurance or liquidity facilities) or of presumed government support leads firms to take on more risk or rely on less robust funding than they would otherwise. A new regulatory structure should address this problem. In particular, a stronger financial regulatory structure would include: a consolidated supervisory framework for all financial institutions that may pose significant risk to the financial system; consideration in this framework of the risks that an entity may pose, either through its own actions or through interactions with other firms or markets, to the broader financial system; a systemic risk oversight council to identify, and coordinate responses to, emerging risks to financial stability; and a new special resolution process that would allow the government to wind down in an orderly way a failing systemically important nonbank financial institution (the disorderly failure of which would otherwise threaten the entire financial system), while also imposing losses on the firm’s shareholders and creditors. The imposition of losses would reduce the costs to taxpayers should a failure occur.

D. Brad DeLong, University of California at Berkeley and blogger: Why haven’t you adopted a 3% per year inflation target?

The public’s understanding of the Federal Reserve’s commitment to price stability helps to anchor inflation expectations and enhances the effectiveness of monetary policy, thereby contributing to stability in both prices and economic activity. Indeed, the longer-run inflation expectations of households and businesses have remained very stable over recent years. The Federal Reserve has not followed the suggestion of some that it pursue a monetary policy strategy aimed at pushing up longer-run inflation expectations. In theory, such an approach could reduce real interest rates and so stimulate spending and output. However, that theoretical argument ignores the risk that such a policy could cause the public to lose confidence in the central bank’s willingness to resist further upward shifts in inflation, and so undermine the effectiveness of monetary policy going forward. The anchoring of inflation expectations is a hard-won success that has been achieved over the course of three decades, and this stability cannot be taken for granted. Therefore, the Federal Reserve’s policy actions as well as its communications have been aimed at keeping inflation expectations firmly anchored.

For more, click here.

Update: The last question, posed by Brad DeLong, seems to have started an interesting debate. Paul Krugman compares the position Bernanke has taken to the one Montagu Norman took during the Great Depression. Read more here.

Sunday, December 13, 2009

People Respond to Incentives

Here’s what happened: in the irrational exuberance of the housing bubble, many people were allowed/encouraged/suckered into buying houses with very little money down, at prices much higher than justified by fundamentals. Now the prices are coming back to earth, and many of these people find themselves owning houses worth considerably less than their mortgages.

This leads to both involuntary and voluntary foreclosure. Involuntary foreclosure comes when people who thought they could deal with unaffordable mortgage payments by refinancing find that they can’t refinance when their home is worth less than they owe.

Voluntary foreclosure comes when people simply walk away, either because the mortgage is “nonrecourse” — the bank can seize the house, but no more — or because they figure, probably correctly, that the bank won’t really try to pursue them.

Hal Varian (Berkeley prof and now chief economist at Google) puts it simply, and in somewhat exaggerated form, by saying that everyone will just default on their existing mortgage and move one house to the left, buying a new house for less than they save by walking away from the mortgage they have. Things don’t work that smoothly, but that gets the principle right.

And what that means is that a substantial portion of the decline in housing values that’s now in progress will eventually show up as losses, not to homeowners, but to investors. We’re talking about some significant fraction of, say, $6 trillion (a 30% decline in home values from their peak). A trillion dollars in investor losses sounds quite reasonable.

American Dream 2: Default, Then Rent

Mortgage Crisis Spreads Past Subprime Loans

Greg Mankiw on Keynesian Medicine

IMAGINE you are a physician and a patient arrives in your office with a troubling and mysterious disease. Some of the symptoms are familiar, but others are not. You have never treated anyone with quite this set of problems.

Based on your training and experience, imperfect as it is, you come up with a proposed remedy. The patient leaves with a prescription in hand. You hope and pray that it works.

A week later, however, the patient comes back and the symptoms are, in some ways, worse. What do you do now? You have three options:

STAY THE COURSE Perhaps the patient was sicker than you thought, and it will take longer for your remedy to kick in.

UP THE DOSAGE Perhaps the remedy was right but the quantity was wrong. The patient might need more medicine.

RETHINK THE REMEDY Perhaps the treatment you prescribed wasn’t right after all. Maybe a different mixture of medicines would work better.

Choosing among these three reasonable courses of action is not easy. In many ways, the Obama administration faces a similar situation right now.

When President Obama was elected, the economy was sick and getting sicker. Before he was even in office in January, his economic team released a report on the problem.

If nothing was done, the report said, the unemployment rate would keep rising, reaching 9 percent in early 2010. But if the nation embarked on a fiscal stimulus of $775 billion, mainly in the form of increased government spending, the unemployment rate was predicted to stay under 8 percent.

In fact, the Congress passed a sizable fiscal stimulus. Yet things turned out worse than the White House expected. The unemployment rate is now 10 percent — a full percentage point above what the administration economists said would occur without any stimulus.

To be sure, there are some positive signs, like reduced credit spreads, gross domestic product growth and diminishing job losses. But the recovery is not yet as robust as the president and his economic team had originally hoped.

So what to do now? The administration seems most intent on staying the course, although in a speech Tuesday, the president showed interest in upping the dosage. The better path, however, might be to rethink the remedy.

When devising its fiscal package, the Obama administration relied on conventional economic models based in part on ideas of John Maynard Keynes. Keynesian theory says that government spending is more potent than tax policy for jump-starting a stalled economy.

The report in January put numbers to this conclusion. It says that an extra dollar of government spending raises G.D.P. by $1.57, while a dollar of tax cuts raises G.D.P. by only 99 cents. The implication is that if we are going to increase the budget deficit to promote growth and jobs, it is better to spend more than tax less.

But various recent studies suggest that conventional wisdom is backward.

One piece of evidence comes from Christina D. Romer, the chairwoman of the president’s Council of Economic Advisers. In work with her husband, David H. Romer, written at the University of California, Berkeley, just months before she took her current job, Ms. Romer found that tax policy has a powerful influence on economic activity.

According to the Romers, each dollar of tax cuts has historically raised G.D.P. by about $3 — three times the figure used in the administration report. That is also far greater than most estimates of the effects of government spending.
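To put those multipliers side by side, here is a quick back-of-the-envelope sketch (mine, not Mankiw's or the Romers'). It simply scales a hypothetical $100 billion of deficit-financed stimulus by each multiplier quoted above; the package size is an arbitrary assumption chosen for illustration.

```python
# Back-of-the-envelope comparison of the fiscal multipliers quoted above.
# The $100 billion package size is an arbitrary illustrative assumption.

PACKAGE_BN = 100  # billions of dollars of deficit-financed stimulus (assumed)

multipliers = {
    "Government spending (administration report)":  1.57,
    "Tax cuts (administration report)":              0.99,
    "Tax cuts (Romer & Romer historical estimate)":  3.00,
}

for label, m in multipliers.items():
    print(f"{label:46s} -> GDP boost of roughly ${PACKAGE_BN * m:,.0f} billion")
```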

Other recent work supports the Romers’ findings. In a December 2008 working paper, Andrew Mountford of the University of London and Harald Uhlig of the University of Chicago apply state-of-the-art statistical tools to United States data to compare the effects of deficit-financed spending, deficit-financed tax cuts and tax-financed spending. They report that “deficit-financed tax cuts work best among these three scenarios to improve G.D.P.”

My Harvard colleagues Alberto Alesina and Silvia Ardagna have recently conducted a comprehensive analysis of the issue. In an October study, they looked at large changes in fiscal policy in 21 nations in the Organization for Economic Cooperation and Development. They identified 91 episodes since 1970 in which policy moved to stimulate the economy. They then compared the policy interventions that succeeded — that is, those that were actually followed by robust growth — with those that failed.

The results are striking. Successful stimulus relies almost entirely on cuts in business and income taxes. Failed stimulus relies mostly on increases in government spending.

All these findings suggest that conventional models leave something out. A clue as to what that might be can be found in a 2002 study by Olivier Blanchard and Roberto Perotti. (Mr. Perotti is a professor at Bocconi University in Milan, Italy; Mr. Blanchard is now chief economist at the International Monetary Fund.) They report that “both increases in taxes and increases in government spending have a strong negative effect on private investment spending. This effect is difficult to reconcile with Keynesian theory.”

These studies point toward tax policy as the best fiscal tool to combat recession, particularly tax changes that influence incentives to invest, like an investment tax credit. Sending out lump-sum rebates, as was done in spring 2008, makes less sense, as it provides little impetus for spending or production.

LIKE our doctor facing a mysterious illness, economists should remain humble and open-minded when considering how best to fix an ailing economy. A growing body of evidence suggests that traditional Keynesian nostrums might not be the best medicine.

Friday, December 11, 2009

HBR Article: What Makes a Leader?

It was Daniel Goleman who first brought the term “emotional intelligence” to a wide audience with his 1995 book of that name, and it was Goleman who first applied the concept to business with his 1998 HBR article, reprinted here. In his research at nearly 200 large, global companies, Goleman found that while the qualities traditionally associated with leadership—such as intelligence, toughness, determination, and vision—are required for success, they are insufficient. Truly effective leaders are also distinguished by a high degree of emotional intelligence, which includes self-awareness, self-regulation, motivation, empathy, and social skill.

These qualities may sound “soft” and unbusinesslike, but Goleman found direct ties between emotional intelligence and measurable business results. While emotional intelligence’s relevance to business has continued to spark debate over the past six years, Goleman’s article remains the definitive reference on the subject, with a description of each component of emotional intelligence and a detailed discussion of how to recognize it in potential leaders, how and why it connects to performance, and how it can be learned.

Every businessperson knows a story about a highly intelligent, highly skilled executive who was promoted into a leadership position only to fail at the job. And they also know a story about someone with solid—but not extraordinary—intellectual abilities and technical skills who was promoted into a similar position and then soared.

Such anecdotes support the widespread belief that identifying individuals with the “right stuff” to be leaders is more art than science. After all, the personal styles of superb leaders vary: Some leaders are subdued and analytical; others shout their manifestos from the mountaintops. And just as important, different situations call for different types of leadership. Most mergers need a sensitive negotiator at the helm, whereas many turnarounds require a more forceful authority.

I have found, however, that the most effective leaders are alike in one crucial way: They all have a high degree of what has come to be known as emotional intelligence. It’s not that IQ and technical skills are irrelevant. They do matter, but mainly as “threshold capabilities”; that is, they are the entry-level requirements for executive positions. But my research, along with other recent studies, clearly shows that emotional intelligence is the sine qua non of leadership. Without it, a person can have the best training in the world, an incisive, analytical mind, and an endless supply of smart ideas, but he still won’t make a great leader.

In the course of the past year, my colleagues and I have focused on how emotional intelligence operates at work. We have examined the relationship between emotional intelligence and effective performance, especially in leaders. And we have observed how emotional intelligence shows itself on the job. How can you tell if someone has high emotional intelligence, for example, and how can you recognize it in yourself? In the following pages, we’ll explore these questions, taking each of the components of emotional intelligence—self-awareness, self-regulation, motivation, empathy, and social skill—in turn.

Evaluating Emotional Intelligence

Most large companies today have employed trained psychologists to develop what are known as “competency models” to aid them in identifying, training, and promoting likely stars in the leadership firmament. The psychologists have also developed such models for lower-level positions. And in recent years, I have analyzed competency models from 188 companies, most of which were large and global and included the likes of Lucent Technologies, British Airways, and Credit Suisse.

In carrying out this work, my objective was to determine which personal capabilities drove outstanding performance within these organizations, and to what degree they did so. I grouped capabilities into three categories: purely technical skills like accounting and business planning; cognitive abilities like analytical reasoning; and competencies demonstrating emotional intelligence, such as the ability to work with others and effectiveness in leading change.

To create some of the competency models, psychologists asked senior managers at the companies to identify the capabilities that typified the organization’s most outstanding leaders. To create other models, the psychologists used objective criteria, such as a division’s profitability, to differentiate the star performers at senior levels within their organizations from the average ones. Those individuals were then extensively interviewed and tested, and their capabilities were compared. This process resulted in the creation of lists of ingredients for highly effective leaders. The lists ranged in length from seven to 15 items and included such ingredients as initiative and strategic vision.

When I analyzed all this data, I found dramatic results. To be sure, intellect was a driver of outstanding performance. Cognitive skills such as big-picture thinking and long-term vision were particularly important. But when I calculated the ratio of technical skills, IQ, and emotional intelligence as ingredients of excellent performance, emotional intelligence proved to be twice as important as the others for jobs at all levels.

Cause and Defect

“LIKE elaborately plumed birds…we preen and strut and display our t-values.” That was Edward Leamer’s uncharitable description of his profession in 1983. Mr Leamer, an economist at the University of California in Los Angeles, was frustrated by empirical economists’ emphasis on measures of correlation over underlying questions of cause and effect, such as whether people who spend more years in school go on to earn more in later life. Hardly anyone, he wrote gloomily, “takes anyone else’s data analyses seriously”. To make his point, Mr Leamer showed how different (but apparently reasonable) choices about which variables to include in an analysis of the effect of capital punishment on murder rates could lead to the conclusion that the death penalty led to more murders, fewer murders, or had no effect at all.

In the years since, economists have focused much more explicitly on improving the analysis of cause and effect, giving rise to what Guido Imbens of Harvard University calls “the causal literature”. The techniques at the heart of this literature—in particular, the use of so-called “instrumental variables”—have yielded insights into everything from the link between abortion and crime to the economic return from education. But these methods are themselves now coming under attack.

Instrumental variables have become popular in part because they allow economists to deal with one of the main obstacles to the accurate estimation of causal effects—the impossibility of controlling for every last influence. Mr Leamer’s work on capital punishment demonstrated that the choice of controls matters hugely. Putting too many variables into a model ends up degrading the results. Worst of all, some relevant variables may simply not be observable. For example, the time someone stays in school is probably influenced by his innate scholastic ability, but this is very hard to measure. Leaving such variables out can easily lead econometricians astray. What is more, the direction of causation is not always clear. Working out whether deploying more policemen reduces crime, for example, is confused by the fact that more policemen are allocated to areas with higher crime rates.

Instrumental variables are helpful in all these situations. Often derived from a quirk in the environment or in public policy, they affect the outcome (a person’s earnings, say, to return to the original example) only through their influence on the input variable (in this case, the number of years of schooling) while at the same time being uncorrelated with what is left out (scholastic ability). The job of instrumental variables is to ensure that the omission of factors from an analysis—in this example, the impact of scholastic ability on the amount of schooling—does not end up producing inaccurate results.

In an influential early example of this sort of study, Joshua Angrist of the Massachusetts Institute of Technology (MIT) and Alan Krueger of Princeton University used America’s education laws to create an instrumental variable for years of schooling. These laws mean that children born earlier in the year are older when they start school than those born later in the year, which means they have received less schooling by the time they reach the legal leaving-age. Since a child’s birth date is unrelated to intrinsic ability, it is a good instrument for teasing out schooling’s true effect on wages. Over time, uses of such instrumental variables have become a standard part of economists’ set of tools. Freakonomics, the 2005 bestseller by Steven Levitt and Stephen Dubner, provides a popular treatment of many of the techniques. Mr Levitt’s analysis of crime during American election cycles, when police numbers rise for reasons unconnected to crime rates, is a celebrated example of an instrumental variable.
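To see how such an instrument is actually used, here is a minimal two-stage least squares (2SLS) sketch on simulated data. It only illustrates the logic of instrumenting schooling with birth timing; the sample size, coefficients and variable names are assumptions of mine, not numbers from the Angrist-Krueger study.

```python
# A minimal, self-contained sketch of the two-stage least squares (2SLS) logic
# behind instrumental-variable studies such as the quarter-of-birth example.
# Everything here is simulated: coefficients, sample size and variable names
# are illustrative assumptions, not estimates from Angrist and Krueger.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

ability   = rng.normal(size=n)              # unobserved confounder
born_late = rng.integers(0, 2, size=n)      # instrument: born late in the year
schooling = 12 + 0.5 * born_late + 0.8 * ability + rng.normal(size=n)
log_wage  = 0.10 * schooling + 0.5 * ability + rng.normal(size=n)  # true return: 0.10

def ols(y, x):
    """Return (intercept, slope) from a simple least-squares fit."""
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive OLS is biased upward: ability raises both schooling and wages.
print("OLS  return to schooling:", round(ols(log_wage, schooling)[1], 3))

# 2SLS: the first stage predicts schooling from the instrument alone;
# the second stage regresses wages on that prediction. The estimate
# should land close to the true 0.10.
a, b = ols(schooling, born_late)
print("2SLS return to schooling:", round(ols(log_wage, a + b * born_late)[1], 3))
```

The point of the exercise is the contrast: ordinary least squares absorbs the unobserved ability term and overstates the return, while the instrumented estimate recovers something close to the true effect.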

Two recent papers—one by James Heckman of the University of Chicago and Sergio Urzua of Northwestern University, and another by Angus Deaton of Princeton—are sharply critical of this approach. The authors argue that the causal effects that instrumental strategies identify are uninteresting because such techniques often give answers to narrow questions. The results from the quarter-of-birth study, for example, do not say much about the returns from education for college graduates, whose choices were unlikely to have been affected by when they were legally eligible to drop out of school. According to Mr Deaton, using such instruments to estimate causal parameters is like choosing to let light “fall where it may, and then proclaim[ing] that whatever it illuminates is what we were looking for all along.”

IV leagues

This is too harsh. It is no doubt possible to use instrumental variables to estimate effects on uninteresting subgroups of the population. But the quarter-of-birth study, for example, shone light on something that was both interesting and significant. The instrumental variable in this instance allows a clear, credible estimate of the return from extra schooling for those most inclined to drop out from school early. These are precisely the people whom a policy that sought to prolong the amount of education would target. Proponents of instrumental variables also argue that accurate answers to narrower questions are more useful than unreliable answers to wider questions.

A more legitimate fear is that important questions for which no good instrumental variables can be found are getting short shrift because of economists’ obsession with solving statistical problems. Mr Deaton says that instrumental variables encourage economists to avoid “thinking about how and why things work”. Striking a balance between accuracy of result and importance of issue is tricky. If economists end up going too far in emphasising accuracy, they may succeed in taking “the con out of econometrics”, as Mr Leamer urged them to—only to leave more pressing questions on the shelf.


If Lehman had not failed, would the crisis have happened anyway?

IN AUGUST 2008 Kenneth Rogoff, a Harvard University economist, briefly rocked world stockmarkets when he warned a conference in Singapore: “We’re not just going to see midsized banks go under in the next few months, we’re going to see a whopper, we’re going to see a big one—one of the big investment banks or big banks.” A month later, in the early hours of September 15th, Lehman Brothers filed for bankruptcy.

Harold James, an economic historian at Princeton University, says Lehman’s failure was analogous to the collapse of Creditanstalt, a big Austrian bank, in 1931. Austria and Germany had borrowed heavily from foreign creditors and the bank’s failure rippled around the world, vastly intensifying the Depression. Lehman’s failure is widely seen as a similar turning-point in the current financial crisis: an unexpected blunder that came close to turning a garden-variety recession into another Depression.

Mr Rogoff has a different view. He believes, as he did the month before Lehman’s collapse, that America had the classic preconditions of a massive financial crisis: trillions of dollars of debt secured by an inexorably deflating asset bubble. Bank write-downs already totalled more than $500 billion in August 2008. If Lehman had not been allowed to fail, some other firm would have, with similar results.

The week before Lehman failed, futures markets predicted a 15% decline in the prices of homes in major metropolitan markets in America over the next nine months, on top of the 24% decline that had already occurred. Such a drop—which is quite close to what actually occurred—would, when combined with similar declines in commercial-property values, have pushed some big banks to the edge. That same week derivatives markets put the odds of default for Washington Mutual, a large thrift (or savings bank), at 85%. Many of the big financial institutions that received bail-outs in the wake of Lehman’s failure, such as American International Group (AIG) and UBS and Fortis in Europe, would probably have needed one sooner or later anyway.

True, from a purely economic point of view, Lehman’s failure created enormous pain. It spawned a panic in the commercial-paper, credit-derivatives and bank-funding markets that dramatically worsened banks’ liquidity. Capital and trade flows collapsed. A vicious spiral of credit withdrawal, weakening growth and debt impairment ensued. In July 2008 the IMF thought the world economy would grow by 3.9% in 2009. It now thinks it will shrink by 1.4%. Moody’s Economy.com forecast in August 2008 that 2.9m first mortgages in America would default in 2009, itself a disastrous figure (less than a million defaulted in 2006). It now expects the tally to hit 3.8m, a result, says Mark Zandi, its chief economist, of a string of policy errors and the resulting financial panic that sapped employment and income.

But from a political point of view, it is harder to see how these missteps could have been avoided. When the Treasury and the Federal Reserve bailed out Bear Stearns in March 2008, they drew criticism in Congress and elsewhere for creating moral hazard. At the same time their intervention led other firms, including Lehman, to believe that when push came to shove, they too would be spared. Had Lehman been rescued the criticism would have intensified, as would firms’ expectations of future rescues. Mr Rogoff maintains that at some point political pressures would have required a big firm to go bust. “If you look at financial crises, the standard playbook is to let the fourth or fifth largest bank go under and you save everybody else,” says Mr Rogoff, who bases much of his analysis on extensive research done with Carmen Reinhart of the University of Maryland for a forthcoming book called “This Time Is Different: Eight Centuries of Financial Folly” (see article).

Wise after the fact

In retrospect, the economically efficient solution would have been, soon after the Bear Stearns rescue, to propose a comprehensive, publicly financed recapitalisation of the banking system while creating a more orderly mechanism for winding up failed institutions (officials still claim they did not have the legal authority to save Lehman). The Treasury and the Fed drew up plans to do just that but worried that Congress would reject them. With good reason: history shows that bank bail-outs are intensely unpopular. Japan’s government dragged its feet on recapitalising its banks in the 1990s because its initial aid provoked such outrage. Some American economists who used to carp at Japan’s failings are more sympathetic now. “It is easier to be for more radical solutions when one lives thousands of miles away than when it is one’s own country,” Larry Summers, Barack Obama’s economic adviser, told the Financial Times earlier this year.

If Mr Rogoff is right and more failures were inevitable, then Lehman’s collapse, though painful, may have been necessary. History suggests that systemic banking crises are usually resolved with large injections of public capital. Lehman’s failure galvanised policymakers. Only when faced with the post-Lehman, post-AIG chaos did Congress pass the $700 billion Troubled Asset Relief Programme (and even then, after an initial rejection). Other rich-country governments also moved to guarantee bank debts, raise deposit insurance and inject capital into their banks.

Mr James notes that when Creditanstalt went under in 1931, political tensions hampered international co-operation. The international response to Lehman’s collapse, reflecting in part the lessons of the 1930s, was much more effective—as exemplified by the Treasury’s willingness to bail out AIG, even though many of the main beneficiaries were European. Mr Rogoff concedes that if the Federal Reserve and Treasury had made flawless decisions unhindered by politics, the outcome would have been happier. But, he says, “it wasn’t humanly possible.” Lehman’s collapse may even have hastened the ultimate resolution of the crisis.

Green with Envy: Tension between free trade and capping emissions

STATEMENTS by Barack Obama on his travels through Asia have lowered expectations that December’s global summit on climate change in Copenhagen will lead to binding cuts in carbon emissions. The urgency of dealing with climate change means that many countries are drawing up national policies to limit emissions. Yet in a globalised world, where production is increasingly mobile across national borders, some worry that there is a fundamental tension between the effectiveness of such policies and a commitment to open trade.

These carbon-reduction policies, such as America’s proposed cap-and-trade scheme, typically put a price on carbon in the hope that this will force producers to bear the costs that their activities impose on the climate. But if different countries cut emissions by different amounts, as is likely, then the price of carbon will vary across nations. If so, manufacturers in countries with tighter environmental rules will face added costs which foreign competitors do not. This could in turn prompt them to relocate some of their production to “carbon havens”, where the cost of polluting is lower. If enough production emigrates, global emissions might even increase.

The likely scale of relocations may be overstated. A new study* by economists at the World Bank and the Peterson Institute for International Economics, a think-tank in Washington, DC, finds that some production would migrate, but that the net increase in emissions in poor countries would be small.

Comparative advantage rests on more than the price of carbon: just as bananas are best grown in warmer places, a higher carbon price alone does not compel German manufacturers of capital goods to decamp to China. Also, the increased output of some energy-intensive goods in poorer countries draws some productive resources away from other industries there. Overall, the authors find that if Europe and America were to reduce emissions by 17% from their 2005 levels by 2020, the additional increase in developing-country emissions would be only 1%. Global emissions would still be almost 10% lower than if nothing had been done. So rising global emissions due to carbon leakage are hardly as big a worry as some make them out to be.

That has not stopped many from proposing taxes that would penalise exports from countries that benefit from low carbon prices. From the point of view of countries with stricter environmental rules, it is easy to see why. According to the study, to reduce its emissions by 17% America would have to cut its exports of energy-intensive goods, such as steel, by 12% and its production of such goods by 4%. Domestic producers of energy-intensive goods, on whom much of the burden of adjustment will fall, will demand some form of compensation or protection. No wonder the climate bill passed by the House of Representatives in America has a provision for taxing imports from countries that have laxer rules on emissions. Nicolas Sarkozy, France’s president, has proposed that Europe adopt a similar strategy, arguing that not to do so would amount to “massive aid to relocations”.

It would be best for trade (and only marginally costly for the environment) if there were no carbon-based tax adjustments. But assuming they were put in place, what might be their effect? That in turn depends on the nitty-gritty: would they be based on the amount of carbon dioxide emitted when making an equivalent good at home, or on the amount actually emitted to make the imported good? The latter would penalise developing countries, because they tend to use much more carbon-intensive technologies. So, if making an American car produced ten tonnes of carbon dioxide, taxed at $60 per tonne, then the tax on a foreign-made car arriving in America would only be $600. But if the tariff were based on the carbon dioxide emitted in the process of making a car in China, it could be double that.
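The arithmetic behind that example is simple enough to lay out explicitly. A minimal sketch, using the article's $60-per-tonne price and ten-tonne American footprint; the 20-tonne Chinese footprint is my assumption standing in for "could be double that".

```python
# Border-tax arithmetic under the two bases discussed above (illustrative only).

CARBON_PRICE = 60   # dollars per tonne of CO2 (from the article's example)
US_FOOTPRINT = 10   # tonnes of CO2 to make a car in America (from the article)
CN_FOOTPRINT = 20   # assumed tonnes of CO2 to make the same car in China

tax_on_domestic_content = US_FOOTPRINT * CARBON_PRICE   # basis: equivalent good made at home
tax_on_actual_content   = CN_FOOTPRINT * CARBON_PRICE   # basis: the imported good itself

print(f"Tariff based on domestic carbon content:  ${tax_on_domestic_content:,}")
print(f"Tariff based on the exporter's emissions: ${tax_on_actual_content:,}")
```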

Assuming the economists are ignored…

A tax based on the carbon footprint of imports, the authors reckon, would certainly benefit America’s energy-intensive industries, which otherwise bear the full cost of plans to reduce emissions. Their output would fall by only 2.5%, instead of 4%. The trouble is, developing countries would be whacked, since their exports would become markedly less competitive. China’s manufacturing exports would decline by a staggering 21% and India’s by 16%. The border tax adjustment would amount to a prohibitive tariff of 26% on China’s exports and 20% on India’s. No wonder that Chinese officials warned angrily of trade wars if border taxes were imposed. A tax based on the carbon footprint of domestic production would be much more benign in its trade effects, reducing China’s and India’s exports by around 3%.

From a global perspective, the case for a trade tax to reduce emissions is weak. But the politics of shrinking dirty industries in rich countries could well mean that such taxes will be imposed anyway. In the long run this climate protectionism could hurt not only trade but also the environment. Trade promotes the adoption of newer, cleaner technology from rich countries by developing ones, improving the techniques of production in poorer countries gradually over time.

And all countries may find that the inevitable changes in weather patterns due to human activity will mean that meeting varied food needs domestically will become even more difficult. Open markets in agriculture, in particular, will be even more crucial in a world plagued by a changing (and more uncertain) climate than they are now. Keeping trade going will therefore help countries adapt to climate change. The risk is that border taxes are erected to protect energy-intensive industries in the rich world, badly hurting trade—and doing little to help the environment.


*“Reconciling Climate Change and Trade Policy”, by Aaditya Mattoo, Arvind Subramanian, Dominique van der Mensbrugghe and Jianwu He. World Bank Working Paper No. 5123, November 2009.

Default Times

Dec 3rd 2009
From The Economist print edition
What would happen if a member of the euro area could no longer finance its debt?

“FORD to City: Drop Dead.” That famous headline from the Daily News ran after President Gerald Ford refused to bail out New York City in October 1975, when the city was close to bankruptcy. Within weeks Ford relented. Behind the belated rescue lay a fear that default by New York would hurt the credit of other cities and states, and perhaps of America.

Similar worries are now being expressed about the euro zone, in light of the standstill request by Dubai World. That the firm had financial troubles was known, but many investors had assumed its debts were backed by the government of Dubai, and ultimately by Dubai’s oil-rich neighbour, Abu Dhabi. There are similar ambiguities within the euro bloc. If countries with rickety public finances, such as Greece, Ireland and Spain, ever found themselves unable to refinance their debt, would other euro members with deeper pockets rescue them? If not, would default by one euro-zone country threaten the viability of the euro itself?

There are few signs of an imminent funding crisis. Yields on the ten-year government bonds of Greece and Ireland, the two euro-zone countries with the largest budget deficits, are a bit below 5%. That is steep compared with the yield on German bonds, at 3.1%, but hardly indicates a buyers’ strike. When investors were last so nervous about the credit of the euro-zone’s periphery, in March, countries were still able to tap bond markets. Greece, for instance, raised €7.5 billion ($9.4 billion) in a single ten-year bond issue, though it had to offer a coupon of 6%.

The worry is that such appetite for risk may not last forever. The euro-zone countries with the most fragile public finances also have worryingly high unit-wage costs. Poor wage competitiveness makes it harder for them to grow quickly and to generate tax revenue. Currency devaluation is the usual remedy for that ill, but is not an option for countries locked inside the euro.

Though their troubles are similar, each country has faced them differently. Mindful of its large public debt, Italy refrained from using fiscal policy as an active tool to stimulate the economy, and has managed to keep its budget deficit below the euro-area average (see left-hand chart). Ireland has tightened fiscal policy by 4-5% of GDP since the crisis struck. A further tightening is expected when its 2010 budget is announced on December 9th, though uncertainty about the public cost of bailing out its banks remains. Spain is reversing its fiscal stimulus by raising its main value-added tax rate next year. As in Ireland, prices in Spain fell faster in the year to October than in the rest of the euro area, a sign of improving competitiveness (see right-hand chart).

The country that stands out as unrepentant is Greece. It ran persistent deficits even in good times. Its new government said in October that this year’s budget deficit would be more than twice as big as previously advertised. The government says it will cut the deficit to 9.1% of GDP next year. But pressure from euro-zone finance ministers for stronger action is building: they will meet in February to approve a new Greek plan to fix its finances, which must be submitted to the European Commission this month.

Some think any problem will be Greece’s alone. After all, the treaty that created the euro contains a “no bail-out” clause that prohibits one country from assuming the debts of another. That makes Greece’s public finances a matter between it and its creditors. Any promise, tacit or otherwise, of a bail-out by others would only encourage more profligacy (a view that mirrors Ford’s initial stance towards New York). In principle, a default by Greece or by any other euro-zone country would not threaten the euro any more than default by New York City in 1975, or California today, would mean the end of the dollar. Indeed, membership of the euro could help make debt-restructuring more orderly, since it would remove currency risk from the equation.

These arguments seem solid enough, but the tale of New York’s bail-out underlines how reality is never quite as tidy as theory. The city came perilously close to declaring bankruptcy when its pleas for a rescue by the federal government fell on deaf ears. What finally saved the city was anxiety about the fallout, in the form of possible bank insolvencies and borrowing costs for the rest of America, had it been unable to pay its debts.

New York’s federal bail-out was punitive, however. Some debtholders did not get their money back straight away, a technical default. The city had to cut public services, shed jobs, freeze pay, abandon capital projects and raise taxes to make sure it could pay back the federal loans. Such belt-tightening had proved necessary even in the months before the rescue. When it came, the president could claim that “New York has bailed itself out.”

The non-bail-out bail-out

It is easy to imagine a similar kind of hard bail-out, should a euro-zone country ever run short of cash. The terms of any deal would depend on the same balance of fears: on the one hand, the fear that trouble might quickly spread to a big country, such as Italy, or to euro-zone banks; on the other hand, the supplicant’s fear of being cut off from external finance. The process would be messy; some debts might not be paid on time. It would be hard to sell to voters in rescuing countries unless, as in New York’s case, the interest rates on bridging loans were punishingly high.

A tough-love bail-out would still need someone with deep pockets to provide the cash. Given the state of public finances even in more stable countries, such as France, that cannot be taken for granted. Germany is better placed but would be unwilling to act alone. Could a defaulter remain in the euro? It is hard to see how it could leave. A country that had just lost the trust of investors in its fiscal rectitude could scarcely build a credible monetary system from scratch. There is no obvious means to force a miscreant out, since euro membership is designed to be irrevocable. How badly the euro’s standing would be hurt by a default would depend on the state of public finances elsewhere: if America were struggling too, the dollar might not seem an attractive bolthole. If the current struggles with a strong euro are any guide, euro members might even half welcome a tarnished currency.

Forex Markets: Efficient or Not

WHEN pundits worry about the distorting effects of cheap money on asset prices, they invariably single out the carry trade as a cause for concern. The term is often used loosely to describe any investment that looks suspiciously profitable. More specifically it refers to a particular sort of foreign-exchange trading: that of borrowing cheaply in a “funding” currency to exploit high interest rates in a “target” currency. The yen has long been a favoured funding currency for the carry trade because of Japan’s permanently low interest rates. As a result of the crisis and near-zero rates in America, the dollar has become one, too.

If markets were truly efficient, carry trades ought not to be profitable because the extra interest earned should be exactly offset by a fall in the target currency. That is why high-interest currencies trade at a discount to their current or “spot” rate in forward markets. If exchange rates today were the same as those in forward contracts, there would be an opportunity for riskless profit. Arbitrageurs could buy the high-interest currency today, lock in a future sale at the same price and pocket the extra interest from holding the currency until the forward contract is settled.
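That no-arbitrage condition is known as covered interest parity, and a minimal numerical sketch makes it concrete. The spot rate and interest rates below are illustrative assumptions of mine, not market quotes.

```python
# Covered interest parity: the forward rate that removes riskless profit.
# All figures are illustrative assumptions.

spot_jpy_per_aud = 80.0   # assumed spot rate: yen per Australian dollar
i_jpy = 0.001             # assumed one-year yen ("funding") interest rate
i_aud = 0.045             # assumed one-year Australian dollar ("target") rate

# The high-interest currency must trade at a forward discount to spot.
forward = spot_jpy_per_aud * (1 + i_jpy) / (1 + i_aud)
print(f"Implied one-year forward: {forward:.2f} yen/AUD "
      f"({forward / spot_jpy_per_aud - 1:+.2%} vs spot)")

# Round trip: borrow yen, buy AUD, earn i_aud, sell the proceeds forward.
# The result equals simply lending the yen at home, so there is no free lunch.
yen_borrowed = 1_000_000
yen_back = (yen_borrowed / spot_jpy_per_aud) * (1 + i_aud) * forward
print(f"Yen after hedged round trip: {yen_back:,.0f} "
      f"vs {yen_borrowed * (1 + i_jpy):,.0f} from lending at home")
```

Only if the spot rate subsequently falls toward this forward rate does the carry trader's interest pickup get wiped out; as the next paragraph notes, that adjustment frequently fails to happen.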

In practice, the forward market is a poor forecaster. Most of the time exchange rates do not adjust to offset the extra yield being targeted in carry trades. So a simple strategy of buying high-yielding currencies against low-yielding ones can be rewarding for those that pursue it. The profits are volatile, however, and carry trades are prone to infrequent but huge losses. In late 2008 the yen rose by 60% in just two months against the high-yielding Australian dollar, a popular target for carry traders. That made it much more expensive to pay back yen-denominated debt.

If efficient-market theory cannot kill the carry trade, why don’t volatile returns, and the occasional massive loss, scare off investors? A new paper* by Òscar Jordà and Alan Taylor of the University of California, Davis, may have the answer. They find that a refined carry-trade strategy—one that incorporates a measure of long-term value—produces more consistent profits and is less prone to huge losses than one that targets the highest yield.

The authors first examine returns to a simple carry trade for a set of ten rich-country currencies between 1986 and 2008. Buying the highest yielder of any currency pair produced an average return of 26 basis points (hundredths of a percentage point) per month. That would be fine, except that the standard deviation of returns, a gauge of how variable profits are, was almost 300 basis points. The monthly Sharpe ratio that measures returns against risk was a “truly awful” 0.1 (the higher the ratio, the better the risk-adjusted performance). Worse still, the distribution of monthly profits was negatively skewed: big losses were more likely to occur than windfall gains.
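As a rough illustration of those summary statistics, the sketch below computes the mean, standard deviation, monthly Sharpe ratio and skew of a simulated series of monthly returns. The mixture parameters are chosen by me purely so the statistics land near the figures quoted above (a mean around 26 basis points, volatility near 300, negative skew); they are not the paper's data.

```python
# Risk statistics for a crude simulated series of monthly carry-trade returns:
# mostly modest gains, with rare large "unwind" losses. Parameters are assumed.

import numpy as np

rng = np.random.default_rng(1)
n = 12 * 23                                   # roughly 1986-2008, monthly

crash = rng.random(n) < 0.03                  # rare crash months
returns_bp = np.where(crash,
                      rng.normal(-900, 300, n),   # large losses
                      rng.normal(55, 230, n))     # ordinary months

mean = returns_bp.mean()
std = returns_bp.std(ddof=1)
sharpe = mean / std                           # monthly Sharpe ratio (risk-free rate ignored)
skew = ((returns_bp - mean) ** 3).mean() / std ** 3

print(f"mean {mean:.0f}bp  std {std:.0f}bp  Sharpe {sharpe:.2f}  skew {skew:.2f}")
```

The negative skew is the tell-tale feature: the big surprises are losses, not gains.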

No sane trader would follow a rule with such poor results. So the authors put together a far richer model to help decide which side of a currency trade to be on. It included things that are most likely to influence short-term movements in currencies, such as the change in the exchange rate over the previous month, as well as the size of the interest-rate and inflation gaps between each pair of currencies. They found that all three factors mattered. Currencies that rose in one month tended to rise in the next month. Those with the highest interest rates went up most, as did currencies with high inflation (which drives expectations of further rate rises).

These impulses can drive exchange rates a long way from their fair or “equilibrium” values. That creates the risk of a sudden reversion that could wipe out earlier profits. To guard against this, the authors added to their model a measure of how far the exchange rate has shifted from its fair value. They found that this alarm bell can sometimes turn a “buy” signal into a “sell”.

This valuation check helped get rid of the negative skew associated with the simplistic version of the carry trade. But the authors thought the model could be improved still further. One worry was that although it makes sense for traders to buy currencies with fat yields, it may be dangerous past a certain point. After all, a high interest rate can be a symptom of a currency in distress. The authors judged that the link between profits and yields was likely to be “non-linear” (ie, its strength alters as the interest rate of the target currency climbs) and changed their model to reflect this. This non-linearity applies to currencies’ values, too: the likelihood of a crash escalates as a currency becomes ever dearer.

Sharpe thinking

The fully evolved model performed well compared with its primitive ancestor. Used in a portfolio of separate carry trades (to limit the volatility that comes with making a single bet), it delivered average monthly returns of 57 basis points, much better than the 26 basis-points profit from the simple approach. The Sharpe ratio based on annual returns was a very healthy 1.27. And returns based on the deluxe model had a positive skew: large windfalls were more likely than big losses. It appears that savvy investors can indeed make sustained profits from the carry trade.

The authors stress that their sophisticated approach was scarcely better than the simple one at predicting the direction of exchange rates. The crucial advantage is that where it made mistakes, the stakes were small. The deluxe model might tell a trader to turn his nose up at a trade with apparently strong returns because of the risk of a currency crash. The trade might well turn out to be profitable but the forgone profit is a small price to pay for avoiding a potentially big loss. It is a lesson that applies to other asset markets, including those for shares, bonds and homes. Trading momentum will often drive up asset values for long periods and persuade buyers that high prices can be justified. But investors ignore fair-value measures at their peril.

* “The Carry Trade and Fundamentals: Nothing to Fear But FEER Itself” by Òscar Jordà and Alan Taylor, November 2009

How Long Will Toyota's Reign Last?

The Economist delves into the state of the company.

Losing its shine

Dec 10th 2009
From The Economist print edition

Unless Akio Toyoda can find an answer to Toyota’s problems, the Japanese company’s reign as the world’s biggest carmaker may be brief



IT IS not unusual in Japan for corporate leaders to make semi-ritualised displays of humility. But when Akio Toyoda, president of Toyota Motor Corporation since June and grandson of the firm’s founder, addressed an audience of Japanese journalists in October his words shocked the world’s car industry.

Mr Toyoda had been reading “How the Mighty Fall”, a book by Jim Collins, an American management guru. In it, Mr Collins (best known for an earlier, more upbeat work, “Good to Great”) describes the five stages through which a proud and thriving company passes on its way to becoming a basket-case. First comes hubris born of success; second, the undisciplined pursuit of more; third, denial of risk and peril; fourth, grasping for salvation; and last, capitulation to irrelevance or death.

Only 18 months ago Toyota displaced General Motors (GM), a fallen icon if ever there was one, as the world’s biggest carmaker. But Mr Toyoda claimed that the book described his own company’s position. Toyota, he reckoned, had already passed through the first three stages of corporate decline and had reached the critical fourth. According to Mr Collins, fourth-stage companies that react frantically to their plight in the belief that salvation lies in revolutionary change usually only hasten their demise. Instead they need calmness, focus and deliberate action.

Is Toyota really in such dire straits? And if it is, can a company that for decades has been the yardstick for manufacturing excellence turn itself around in time?

A reliable engine stalls

In many ways, Mr Toyoda is right to sound the alarm. Toyota could not have been expected to shrug off the storm that swept through the car industry after the collapse of Lehman Brothers in September last year; but rivals, notably Volkswagen (VW) of Germany and Hyundai Kia of South Korea, have weathered it far better. In the past Toyota went on racking up profits in booms and recessions alike. Not this time.

In the financial year that ended in March, amid admittedly the worst sales slump in the industry’s modern history, Toyota made a net loss of ¥437 billion ($4.3 billion), its first since 1950. Even more startling, the former cash machine (it had rung up a record profit of ¥1.7 trillion the year before) managed to lose ¥766 billion in the three months to March alone—the equivalent of $2.5 billion more than GM did in the same period as it hurtled towards bankruptcy. Toyota expects to lose ¥200 billion this year. But for belated cost-cutting measures and falls in raw-material prices, the forecast would be worse.

Some analysts think that is conservative, because sales in America and Japan appear to be recovering slowly and costs are being slashed further (the company says it is shooting for “emergency profit improvements” of around ¥1.25 trillion). In the most recent quarter Toyota made a surprise net profit of ¥58 billion. It also raised its sales forecast for the year from 6.6m units to 7m. Much, however, depends on the yen-dollar exchange rate. The yen has been climbing, and a rise of ¥1 can subtract ¥30 billion from Toyota’s bottom line.

What should be worrying Mr Toyoda more than the firm’s short-term financial position—its cash pile is an enviable ¥2.65 trillion—is the loss of its once seemingly unstoppable market-share momentum. In 2002 the then president, Fujio Cho, declared that Toyota was aiming for 15% of the global market by 2010. It chased volume at almost any price. By 2007 Toyota’s sales had reached nearly 9m cars, 13.1% of the world total. Last year that share was stable, but this year it seems likely to fall to 11.8% (see chart 1). It has been flat or falling in every important region except Japan, where it has benefited from generous tax breaks on hybrid vehicles, in which it is stronger than its domestic rivals.

In America, Toyota’s largest and hitherto most profitable market, its share has stayed at around 16.5%, hardly a brilliant performance given Detroit’s long, dark night of the soul. So far this year its sales are down by nearly a quarter—a figure not as dreadful as GM’s, but much worse than VW’s and worse even than Ford’s. Hyundai’s sales went up (see chart 2).

In Europe, Toyota’s share was the lowest since 2005. Most worrying, after several good years it fell back in China, not only the world’s fastest-growing car market but now also its biggest. Toyota lost more than two points of market share, the worst performance of the 24 brands on sale in the country (see chart 3). In Brazil and India, Toyota scraped along with little more than 2% of either market.

Toyoda’s to-do list

There is plenty here to concern Mr Toyoda. The first is that for a global carmaker Toyota has been slow off the mark in several emerging markets that are likely to provide nearly all the growth in sales when the mature markets of America, western Europe and Japan have recovered to something like normality. VW is far ahead of Toyota in China and out of sight in Brazil. GM, for all its difficulties, is still doing better than Toyota in China and sells nearly ten times as many vehicles in Brazil. Hyundai almost overtook Toyota in China this year and is the biggest foreign car brand in India. Toyota’s first low-cost car designed especially for the price-sensitive Indian market is still a year away.

The second thing that Mr Toyoda should reflect upon is that Toyota is sluggish for different reasons in different markets. This may make answers harder to find. In China, it took longer than rivals to respond to tax breaks for vehicles with smaller engines and it has made less effort to develop cars specifically for the Chinese market. In Europe, the solid but ageing Yaris and the dull Auris left it poorly placed to exploit the scrappage schemes that boosted sales, and its lack of a full range of competitive diesels continues to hinder it.

In America, Toyota is still hugely powerful. It sells more cars there than anyone (the Detroit Three remain highly dependent on big pickups and sport-utility vehicles), it leads in small trucks and it has the bestselling luxury brand in Lexus. But it has also been clobbered by an avalanche of bad publicity, after the recall of 3.8m Toyota and Lexus vehicles. The recall was prompted by the crash of a Lexus saloon in which a California Highway Patrol officer and his family were killed. The apparent cause was “unintended acceleration”.

At first the National Highway Traffic Safety Administration (NHTSA) and Toyota thought that a badly fitting floor mat could have jammed the accelerator open. Both still think that probable. But the NHTSA is continuing its investigation, having received more than 400 complaints about acceleration problems that appear to have been responsible for several fatal accidents. It is now focusing on possible problems with the design of the throttle pedal and the vehicles’ electronics. On November 25th Toyota announced that it would reshape the suspect pedals or fit redesigned ones in 4.26m vehicles. Some will also get reshaped floor-pans and a brake-override system.

America’s ever-eager plaintiff lawyers already have Toyota in their sights. A Californian law firm specialising in customer class-action suits, McCuneWright, filed a suit on November 5th. Citing 16 known deaths and hundreds of injuries, it alleged that “neither driver error nor floor mats can explain away many other frightening instances of runaway Toyotas.”

Almost every carmaker has had to contend with recalls and ambulance-chasing lawyers, but in a place as litigious as America the reputational damage can be severe. Audi (part of the VW Group) has taken more than 20 years to recover from allegations of unintended acceleration that ultimately proved to be groundless.

In another class-action suit, triggered by a former employee, a corporate lawyer named Dimitrios Biller, Toyota is accused of trying to cover up evidence that it knew some of its vehicles could be deadly in rollover accidents. These were not high-sided SUVs, which are prone to rolling over, but its bestselling Camry and Corolla saloons. The company has raised questions about Mr Biller’s veracity and employment record, but the allegations have not gone away. The suggestion that squeaky-clean Toyota’s behaviour may have resembled that of Ford and GM, which in the distant past covered up problems with the Pinto and the Corvair, is especially wounding.

Last month Toyota’s standing was dealt a further blow. The Insurance Institute for Highway Safety, a car-safety research group funded by insurers, announced its highest-rated cars and SUVs for 2010, having added a rollover roof-strength test this year. Not one of the 27 vehicles it chose was a Toyota. The company called this finding “extreme and misleading”.

The danger in all of this for Toyota is that its loyal (and mostly satisfied) customers in America have long believed that the firm was different from others and thus hold it to a higher standard. The moment that Toyota is seen as just another big carmaker, a vital part of the mystique that has surrounded the brand will have been rubbed away.

Another part of that mystique has also suffered some scratches. Just as Cadillac used to be synonymous with luxury and BMW with sportiness, Toyota was a byword for quality and reliability. A few years ago its crown slipped when a number of quality problems surfaced. In July 2006, after a spate of well-publicised recalls, Katsuaki Watanabe, Mr Toyoda’s immediate predecessor, bowed in apology and promised to fix things with a “customer first” programme that would redirect engineering resources and, if necessary, lengthen development times.

However, the recalls continued and Toyota started slipping in consumer-quality surveys. A year later Consumer Reports, an influential magazine, dropped three Toyota models from its recommended list. The magazine added that it would “no longer recommend any new or redesigned Toyota-built models without reliability data on a specific design”.

People within the company believe these quality problems were caused by the strain put on the fabled Toyota Production System by the headlong pursuit of growth. Toyota now looks as though it has been largely successful in solving them. In the latest annual reliability study published by Consumer Reports, Toyota boasted 18 of the 48 leading vehicles. Honda, the next best, had only eight.

The report, however, also contained less welcome news. Ford vehicles, long among the also-rans, are now showing “world-class reliability”. To back up the claim, Ford’s highly praised new Fusion beat not only the Camry but also its main rival, the Honda Accord, as the best in the hugely important mid-size segment. In an annual study of the dependability of three-year-old vehicles, J.D. Power, an automotive consultancy, placed Buick (a GM brand) and Jaguar joint first, ahead of both Lexus and Toyota.

For years Toyota has been the quality benchmark for every carmaker, but at the very moment it faltered, others were finally catching up. The truth is that although a few fail to make the grade—Chrysler still has a lot of catching-up to do—most cars these days are extraordinarily well-made. The quality surveys by which buyers used to set such store are now based on minute differences. This is the main reason why the manufacturers’ positions in the league tables have become increasingly volatile.

If Toyota can no longer rely on its superior quality to give it an edge, its vehicles will inevitably be judged increasingly on more emotional criteria, such as styling, ride, handling and cabin design. In America, Toyota is likely to face much more consistent competition from at least two of Detroit’s Big Three, while both Hyundai and VW are starting to snap at its heels. The South Korean company has put on an astonishing spurt this year, adding about two points of market share to take it to 7.2%. Its Lexus-rivalling Genesis saloon was named North American car of the year. In 2010 it will start selling the new Sonata, which looks like being a great improvement over the old model, aiming it squarely at the Camry.

And whereas Toyota’s sales have fallen by 23.8% in America so far this year, VW’s sales have dropped by only 6.6%. In 2011 VW will start making cars in America after a break of more than 20 years. The first car out of the factory in Chattanooga, Tennessee, will be a saloon specially designed for the American market. It too will take on the Camry. VW is planning to double its sales in America by 2018, to around 800,000. Though far short of the record 2.6m vehicles Toyota sold in America in 2007, this is a sign of the German group’s intent.

The relentless pace at which VW continues to churn out an unending succession of new models across its unmatched stable of brands, each one keenly priced and brimming with showroom appeal, has shaken the rest of the industry, Toyota included. VW is laying plans that it believes will sweep it past Toyota to become the world’s biggest carmaker within a decade. Even now, it is not far behind, although this year it has been helped by its geographic sales pattern compared with Toyota’s. This week VW said it would buy a stake of 19.9% in Suzuki, a Japanese car- and motorcycle-maker that dominates the Indian market through Maruti, its local subsidiary (see article).

Pizzazz, please

How will Toyota respond? Publicity-shy Toyota executives hate announcing detailed strategies to the outside world. Nor have many of them yet come to terms with Mr Toyoda’s urgency and appalling frankness. Almost to a man, they insist that his words about the firm “grasping for salvation” were widely misunderstood. But for all that, there is plenty going on behind the scenes beyond ferocious cost-cutting. Upon seizing the reins in June, Mr Toyoda immediately ordered a back-to-basics overhaul of product development across the firm’s global operations.

One conclusion was that Toyota should be more ruthless in exploiting its early leadership in commercialising hybrid systems and electric-vehicle technology. Although every other big carmaker is launching new hybrids (including plug-ins) and purely battery-powered vehicles, or is preparing to, Toyota is convinced that it is still ahead of the pack. Within a few years there will be a hybrid version of every car Toyota makes and there are plans to extend the Prius brand to cover a range of innovative low- and zero-emission vehicles.

Another conclusion—and possibly a more radical notion—was that Toyota must stop making so many dull cars with all the appeal of household appliances. Importantly, Mr Toyoda is what is known as a “car guy”, a part-time racer and an enthusiast for cars that are designed with passion to engage the right-side as much as the left-side of the customer’s brain. At the Tokyo motor show in October he said pointedly: “I want to see Toyota build cars that are fun and exciting to drive.”


As Morizo, the alter ego under which he blogs, Mr Toyoda went further. He said of the cars at the show: “It was all green. But I wonder how many inspired people to get excited. Eco-friendly cars are a prerequisite for the future, but there must be more than that.” After trying VW’s hot Scirocco coupé in July, he blogged: “I’m jealous! Morizo cannot afford to lose. I will tackle the challenge of creating a car with even more splendid flavour than the Scirocco.” His favourite metaphor is that Toyota’s engineers should be like chefs, seasoning their cars with tantalising flavours.

He still has some way to go. As Car magazine observed recently: “Excepting the small cars and the Prius, Toyota’s European range is as appetising as an all-you-can-eat tofu buffet.” Strategic Vision, a market-research firm based in San Diego that studies the factors that drive both the choices car-buyers make and subsequent owner satisfaction, publishes an annual “Total Value Index” covering 23 different categories of vehicle. In this year’s study, which was based on feedback from 48,000 buyers, for the first time Toyota had no winners at all. The authors of the study concluded that other carmakers had caught up with Toyota on quality while offering products that inspired greater “love”.

There is also only so much that one man can do to shift the culture of a vast organisation. But there is nothing engineers like more than to be challenged, and Toyota employs many of the world’s finest. The latest, third-generation Prius and the brilliant little iQ city car show what they are capable of. So, in a very different way, does the 202mph (325kph) Lexus LFA. Kaizen, the pursuit of continuous improvement, is, after all, embedded deep in Toyota’s DNA and only needs prodding.

The test will be to keep the ingredients that have made Toyota great—the dependability and affordability—while adding the spice and the flavours that customers now demand. It will not be easy, and the competition has never looked more formidable. But by recognising the scale of Toyota’s problems, by proclaiming their urgency and then by drawing on the firm’s strengths to fix them, Mr Toyoda has already taken the first, vitally important, step towards salvation.

A Look Inside Buffett's Battery Bet

Brilliantly illustrates the canny abilities of Warren Buffett

"The People’s Professor"

Prakash Loungani profiles Joseph Stiglitz

The Nobelist identifies key mistakes that led to the crisis.

Wednesday, October 28, 2009

Restructuring ING

Dramatic shakeup

SuperFreakonomics

Steven Levitt has been getting a lot of heat over his new book, 'SuperFreakonomics'.

Let's look at the criticism he and his co-author are facing and their defence.

http://www.standupeconomist.com/blog/economics/climate-change-in-superfreakonomics/


http://krugman.blogs.nytimes.com/2009/10/17/superfreakonomics-on-climate-part-1/

http://freakonomics.blogs.nytimes.com/2009/10/17/the-rumors-of-our-global-warming-denial-are-greatly-exaggerated/

http://krugman.blogs.nytimes.com/2009/10/17/weitzman-in-context/

http://freakonomics.blogs.nytimes.com/2009/10/23/the-superfreakonomics-global-warming-fact-quiz/

http://freakonomics.blogs.nytimes.com/2009/10/18/global-warming-in-superfreakonomics-the-anatomy-of-a-smear/

http://online.wsj.com/article/SB10001424052748704335904574495643459234318.html

A Defence of Insider Trading

Relying upon competition and the self-interest of shareholders and creditors (both actual and potential) to discover which types of information are proprietary—and, hence, protected from insider trading—and which types of information are not proprietary removes politics from this vital task. Importantly, it also replaces the unreliable judgments and "best guesses" of political officials with the much more reliable determinations of competition.

Is the U.S. Economy Turning Japanese?

Mr. Wood, an equity strategist for CLSA Ltd. in Hong Kong, says no.

The Post-Gracious President

Not so hopeful a President after all.

Historical Lessons

Food aid bad, but food aid essential: an Ethiopian tragedy

Monday, October 12, 2009

And the 2009 Economics Nobel Goes to....

Wonky Explanation



Michael Spence, chairman of the Commission on Growth and Development, 2001 Nobel laureate in economics and a senior fellow at Stanford's Hoover Institution, explains the contribution of this year's Nobel laureates, adding that markets aren't everything.


The Nobel Prize this year recognizes two distinguished scholars, Elinor Ostrom and Oliver Williamson, one a political scientist and the other an economist. The press has noted that Professor Ostrom is the first woman to receive the prize in economic sciences. This is to be celebrated, as is the likelihood that many more will follow in the years to come. Women in many subfields of economics are now among the intellectual leaders and innovators.

The common theme underlying the prize this year is that markets do not solve all problems of resource allocation and incentives well, or even at all. That is not a new idea. What is important is that people and societies find ways to solve these problems through organizational structures and arrangements, political and other institutions, values, incentives and recognition, and the careful management of information. Professors Ostrom and Williamson have led the development of this increasingly important part of economics. In reading their work, you are impressed that economics is not fundamentally about markets, but about resource allocation and distribution problems. Markets appear because they handle a subset of these resource allocation challenges effectively. Alternative creative institutional arrangements have been devised and refined over time to deal with those that markets handle imperfectly.


Sometimes those institutional arrangements include "creating" markets as has been done with some effluents that affect air quality. But often this approach is impossible or impractical because of monitoring and other costs.

The deeper insight that these scholars have helped us to understand is that there are many circumstances in which non-cooperative outcomes (Nash equilibria) are deficient or sub-optimal, and that a good part of economic and social progress lies in the creative design of institutions whose purpose is to bring these non-cooperative equilibria closer to socially and economically efficient and fair results.
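The flavour of that insight can be shown with a toy two-player "commons" game; the payoffs below are invented purely for illustration. Overusing the shared resource is each player's best response whatever the other does, so the only Nash equilibrium is mutual overuse, even though mutual restraint would leave both better off. A minimal Python sketch:

# Toy commons game with hypothetical payoffs: the unique Nash equilibrium
# (overuse, overuse) is worse for both players than mutual restraint.
from itertools import product

ACTIONS = ["restrain", "overuse"]

# payoffs[(a1, a2)] = (payoff to player 1, payoff to player 2)
payoffs = {
    ("restrain", "restrain"): (3, 3),
    ("restrain", "overuse"):  (0, 4),
    ("overuse",  "restrain"): (4, 0),
    ("overuse",  "overuse"):  (1, 1),
}

def is_nash(a1, a2):
    """True if neither player gains by unilaterally switching action."""
    p1, p2 = payoffs[(a1, a2)]
    best_dev_1 = max(payoffs[(d, a2)][0] for d in ACTIONS)
    best_dev_2 = max(payoffs[(a1, d)][1] for d in ACTIONS)
    return p1 >= best_dev_1 and p2 >= best_dev_2

print([cell for cell in product(ACTIONS, repeat=2) if is_nash(*cell)])
# [('overuse', 'overuse')] -- yet (restrain, restrain) would pay (3, 3) instead of (1, 1)

Institutional design in this sense amounts to changing the payoffs, or the information available to the players, so that a better outcome becomes sustainable.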

Climate change is a commons problem on a global scale with the added complication that collective action is designed not directly to produce results (in the sense of temperature reduction), but rather to acquire tail insurance by shifting the probability distributions against outcomes that are highly destructive but not certain to occur. Though some are not convinced this is a problem worth acting on, a majority globally recognize that there are risks to be taken seriously. This may be the most complex commons problem we have yet faced. We are in the midst of shifting values with respect to energy efficiency and clean technology. The challenge is to design institutions, mechanisms and incentives that move us in the right direction.

But many would argue that the most challenging problems are those associated with knowledge and information. Here Professors Ostrom and Williamson have had a major impact on our understanding.

Knowledge is, for the most part, the shared (read commons) intangible asset on which growth, development and prosperity are based. Though there are proprietary market add-ons designed to create or enhance incentives for innovation, the broad corpus is augmented, shared and transmitted through an extraordinarily complex and evolving set of institutional arrangements that include educational institutions, firms, multinational organizations, and governments. It has long been known that knowledge is an "unusual" economic commodity, in that if you have it and you give or transmit it or even sell it to me, then we both have it. The costs of creating the knowledge are a lot higher than the costs of sharing it. If it is priced efficiently to cause the sharing to occur, the incentives to produce it may be damaged, and vice versa.
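A stylised numerical example, with entirely made-up figures, helps pin down that trade-off: priced at the near-zero cost of sharing, knowledge spreads widely but its creator never recovers the fixed cost of producing it; priced high enough to recover that cost, much socially valuable sharing never happens.

# Stylised, invented numbers illustrating the knowledge-pricing dilemma.
CREATION_COST = 1_000_000     # fixed cost of producing the knowledge (assumed)
SHARING_COST = 1              # marginal cost of transmitting it to one more user

# Priced at marginal cost: wide diffusion, but creation does not pay.
users_at_marginal_price = 500_000             # assumed demand at a price of 1
print(users_at_marginal_price * SHARING_COST - CREATION_COST)   # -500000

# Priced to recover the creation cost: creation pays, diffusion shrinks.
high_price, users_at_high_price = 30, 40_000  # assumed demand at a price of 30
print(high_price * users_at_high_price - CREATION_COST)         # 200000
# The 460,000 users priced out, who valued the knowledge above its 1-unit
# sharing cost, are the efficiency loss the paragraph above describes.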

Solutions have combined the inculcation of values, trust, incentives, partial use of market mechanisms, institutions and public sector investments. It is a work in progress as new challenges arise, but also an extraordinary success story.

George Akerlof received the Nobel Economics Prize for his analysis of how markets perform when there are asymmetric informational gaps and private information that is not easy to share. His persuasive short answer was "quite badly", with great insight as to why. One could leave it there. But of course, as this year's Nobel Prize winners have taught us, institutions including business firms are created in part to solve these resource-allocation problems where markets fail. They do this by changing the informational and incentive structures that plague market performance. Professor Williamson has been at the forefront of understanding these processes, in asking what part of resource allocation is done within the firm and when the process is turned over to the market, or, as the Nobel Prize Committee put it, what determines the boundary between the firm and the market.

The prizes this year can be celebrated for recognizing two highly original scholars and, in so doing, for highlighting the important parts of economics, political science and political economy that they have done so much to build.

Update: Paul Romer's praise


What happened to global warming?

From BBC News


This headline may come as a bit of a surprise; so too might the fact that the warmest year recorded globally was not in 2008 or 2007, but in 1998.

But it is true. For the last 11 years we have not observed any increase in global temperatures.

And our climate models did not forecast it, even though man-made carbon dioxide, the gas thought to be responsible for warming our planet, has continued to rise.

So what on Earth is going on?

Climate change sceptics, who passionately and consistently argue that man's influence on our climate is overstated, say they saw it coming.

They argue that there are natural cycles, over which we have no control, that dictate how warm the planet is. But what is the evidence for this?

During the last few decades of the 20th Century, our planet did warm quickly.


Sceptics argue that the warming we observed was down to the energy from the Sun increasing. After all, 98% of the Earth's warmth comes from the Sun.

But research conducted two years ago, and published by the Royal Society, seemed to rule out solar influences.

The scientists' main approach was simple: to look at solar output and cosmic ray intensity over the last 30-40 years, and compare those trends with the graph for global average surface temperature.
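In that spirit, and only as an illustration rather than the Royal Society study itself, such a comparison of trends might look like the Python sketch below; both series are synthetic, not real observations.

# Compare the linear trend in a solar-activity proxy with the trend in
# temperature. The data below are synthetic and purely illustrative.
import numpy as np

years = np.arange(1975, 2008)
rng = np.random.default_rng(0)

# Flat 11-year solar cycle with noise, i.e. no long-term trend.
solar = 1366 + 0.5 * np.sin(2 * np.pi * (years - 1975) / 11) + rng.normal(0, 0.1, years.size)
# Steady warming trend plus noise.
temperature = 0.017 * (years - 1975) + rng.normal(0, 0.08, years.size)

solar_trend = np.polyfit(years, solar, 1)[0]         # roughly zero per year
temp_trend = np.polyfit(years, temperature, 1)[0]    # clearly positive
correlation = np.corrcoef(solar, temperature)[0, 1]  # weak

print(solar_trend, temp_trend, correlation)

If the solar series shows no trend while temperature climbs, solar output alone cannot account for the warming, which is essentially the argument made below.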

And the results were clear. "Warming in the last 20 to 40 years can't have been caused by solar activity," said Dr Piers Forster from Leeds University, a leading contributor to this year's Intergovernmental Panel on Climate Change (IPCC).

But one solar scientist, Piers Corbyn of Weatheraction, a company specialising in long-range weather forecasting, disagrees.

He claims that charged particles from the Sun affect us far more than is currently accepted; so much so, he says, that they are almost entirely responsible for what happens to global temperatures.

He is so excited by what he has discovered that he plans to tell the international scientific community at a conference in London at the end of the month.

If proved correct, this could revolutionise the whole subject.

Ocean cycles

What is really interesting at the moment is what is happening to our oceans. They are the Earth's great heat stores.


According to research conducted by Professor Don Easterbrook from Western Washington University last November, the oceans and global temperatures are correlated.

The oceans, he says, warm and cool in cycles. The most important one is the Pacific decadal oscillation (PDO).

For much of the 1980s and 1990s, it was in a positive cycle, meaning it was warmer than average. And observations have revealed that global temperatures were warm too.

But in the last few years it has been losing its warmth and has recently started to cool down.

These cycles in the past have lasted for nearly 30 years.

So could global temperatures follow? The global cooling from 1945 to 1977 coincided with one of these cold Pacific cycles.

Professor Easterbrook says: "The PDO cool mode has replaced the warm mode in the Pacific Ocean, virtually assuring us of about 30 years of global cooling."

So what does it all mean? Climate change sceptics argue that this is evidence that they have been right all along.

They say there are so many other natural causes for warming and cooling, that even if man is warming the planet, it is a small part compared with nature.

But those scientists who are equally passionate about man's influence on global warming argue that their science is solid.

The UK Met Office's Hadley Centre, responsible for future climate predictions, says it incorporates solar variation and ocean cycles into its climate models, and that they are nothing new.

In fact, the centre says they are just two of the whole host of known factors that influence global temperatures - all of which are accounted for by its models.

In addition, say Met Office scientists, temperatures have never increased in a straight line, and there will always be periods of slower warming, or even temporary cooling.

What is crucial, they say, is the long-term trend in global temperatures. And that, according to the Met Office data, is clearly up.

To confuse the issue even further, last month Mojib Latif, a member of the IPCC, said that we may indeed be in a period of cooling worldwide temperatures that could last another 10-20 years.


Professor Latif is based at the Leibniz Institute of Marine Sciences at Kiel University in Germany and is one of the world's top climate modellers.

But he makes it clear that he has not become a sceptic; he believes that this cooling will be temporary, before the overwhelming force of man-made global warming reasserts itself.

So what can we expect in the next few years?

Both sides have very different forecasts. The Met Office says that warming is set to resume quickly and strongly.

It predicts that from 2010 to 2015 at least half the years will be hotter than the current hottest year on record (1998).

Sceptics disagree. They insist it is unlikely that temperatures will reach the dizzy heights of 1998 until 2030 at the earliest. It is possible, they say, that because of ocean and solar cycles a period of global cooling is more likely.

One thing is for sure. It seems the debate about what is causing global warming is far from over. Indeed some would say it is hotting up.