Financial Arbitrage

Last month’s column introduced the concept of arbitrage, in which an asset is bought and sold near-simultaneously (the duration for which the asset is held can range widely, depending on the market perspective) in two different markets, with the profit derived from the price differential.  Arbitrage functions to equalize price gradients across the market landscape, indirectly communicating information between buyers and sellers and thereby leading to a more efficient economy.  Of course, the parties engaged in arbitrage don’t set out to perform a useful service; they want to get incredibly rich.  But seeking profit for themselves produces, essentially as a by-product, a societal good.  Their savviness in producing a profit ensures that they will look for arbitrage opportunities with a diligence and innovativeness that someone simply hired for the job would never match.

The place where this ‘goodness’ is most fully on display is the financial market, where likely billions are made in arbitrage each day and where the erasure of gradients across the economy serves the most people.  It is within this context that this month’s column explores how to price a security or capital instrument so as to maximize profit and minimize risk.

To this end, this analysis will briefly explore two models: the Capital Asset Pricing Model (CAPM) and the Arbitrage Pricing Theory (APT).

Because a security’s price is essentially negotiated between the buyer and the seller at the time of the transaction and is not set by some outside force (e.g., Fred’s or Joe’s market in last month’s banana example), it is distinctly possible for an arbitrage opportunity to fail to net a profit.  In other words, despite the classical analysis to the contrary, arbitrage activities carry risk.  How much an investor should be willing to pay for the asset, and how much he can reliably sell it for, become incredibly important questions.

In some sense CAPM is a special case of APT and, as a result, both models share similar mechanics and strategies for minimizing risk while maximizing profit.  Let’s deal with the mechanics first.

In a financial arbitrage, the party engaged in the arbitrage (called an arbitrageur) first identifies a mispriced asset.  If the asset is too expensive, he sells it and uses the proceeds to buy another asset.  If the asset is too cheap, he sells something else and uses the proceeds to buy the cheaper security.  In both cases, a sense of relative pricing attaches when deciding which asset goes where.  In an ideal situation, both assets will be mispriced, but it is likely that the arbitrageur has to settle for just one.  The purchased asset is then held for some time until it is relatively overpriced, at which point it provides the working fund for the next transaction.  It is important to understand that the sales the arbitrageur enacts are typically short sales.

The strategy clearly centers on identifying an asset that is mispriced relative to the market as a whole, but since the asset is held for some time, called the period, the key feature is comparing the rate of return of the asset relative to other assets.  The measure of relative fitness is based on the response of the asset’s price to a host of systemic, macroeconomic risks, such as inflation, unemployment, and so on.  For each of these risk factors, the risk-free rate of return of the asset is modified by a linear correction.  In the abstract, this modification results from the following equation (adapted from Arbitrage Pricing Theory (APT) by Adam Hayes):

RA = Rfree + β1 ( P1 - Rfree ) + β2 ( P2 - Rfree ) + ...
   = Rfree + β1 RP1 + β2 RP2 + ...

where:

  • RA is the expected rate of return of the asset in question,
  • Rfree is the rate of return if the asset had no dependence on the identified macroeconomic factors (free rate of return),
  • βi is the sensitivity of the asset with respect to the ith macroeconomic factor, and
  • Pi is the additional risk premium associated with the ith macroeconomic factor with RPi = Pi - Rfree being the actual risk premium.

As in most things, it is much easier to understand this model with a concrete example (derived from Hayes’s article).  Consider an asset that depends on the following four macroeconomic factors (i.e., i runs from 1 to 4):

  • Gross domestic product (GDP) growth
  • Inflation rate
  • Gold prices
  • and the return on the Standard and Poor’s 500 index

Historic data are typically analyzed, according to the available literature, via a linear regression.  This process not only identifies the preceding four factors as the most important, it also gives values for the sensitivity factor β and the additional premium P for each.  Assuming a free rate of return Rfree = 3%, the data conveniently present themselves in the following table:

Macroeconomic Factor | Sensitivity factor β | Additional Premium P | Risk Premium RP = P - Rfree | β × RP
GDP Growth | 0.6 | 7% | 4% | 2.4%
Inflation | 0.8 | 5% | 2% | 1.6%
Gold prices | -0.7 | 8% | 5% | -3.5%
S&P 500 | 1.3 | 12% | 9% | 11.7%

Adding up each value in the last column and then adding the result to Rfree gives an expected return for the asset of RA = 15.2%.
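For readers who want to check the arithmetic, below is a minimal Python sketch of the same calculation.  The factor names, betas, and premiums are simply the illustrative numbers from the table above, not measured market data.

```python
# Minimal sketch of the APT return calculation from the table above.
# All numbers are the illustrative values used in the text, not real data.

R_FREE = 0.03  # assumed risk-free rate of return (3%)

# (sensitivity beta, additional premium P) per macroeconomic factor
factors = {
    "GDP growth":  (0.6, 0.07),
    "Inflation":   (0.8, 0.05),
    "Gold prices": (-0.7, 0.08),
    "S&P 500":     (1.3, 0.12),
}

# RP_i = P_i - R_free ; expected return R_A = R_free + sum(beta_i * RP_i)
expected_return = R_FREE + sum(beta * (premium - R_FREE)
                               for beta, premium in factors.values())

print(f"Expected asset return R_A = {expected_return:.1%}")  # -> 15.2%
```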

The list of APT macroeconomic factors commonly used includes the ones listed above as well as the corporate bond spread, shifts in the yield curve, commodity prices, market indices, exchange rates, and a host of others.  Basically, any factor that affects all assets across the economy as a whole should figure in, as there is no way to mitigate these risks by diversification.

In the above example, the RP parameters were assumed a priori.  In his article Arbitrage Pricing Theory: It’s Not Just Fancy Math, Elvin Mirzayev walks through how to simultaneously solve for the βs to get what we are really after: the intelligently derived expected return on the asset.  (CFI’s Arbitrage Pricing Theory has a similar example that complements the previous presentation – financial gurus aren’t often clear in their explanations and having multiple sources helps.)  Once that is obtained, it is compared to the offered rate and, when the two differ sufficiently, the asset is ripe for arbitrage.
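As a rough illustration of what that regression step looks like in practice, here is a minimal sketch using ordinary least squares on synthetic data.  The period count, the ‘true’ betas, and the factor series are invented for the example; a real implementation would add an intercept, factor-mimicking portfolios, and diagnostics that are omitted here.

```python
# Sketch of the regression step: estimate the betas by regressing the asset's
# historical excess returns on the factors' excess returns.
# The data below are randomly generated stand-ins, not real market history.

import numpy as np

rng = np.random.default_rng(0)

n_periods, n_factors = 120, 4             # e.g. ten years of monthly data
true_betas = np.array([0.6, 0.8, -0.7, 1.3])   # hypothetical sensitivities

factor_excess = rng.normal(0.0, 0.02, size=(n_periods, n_factors))
noise = rng.normal(0.0, 0.01, size=n_periods)
asset_excess = factor_excess @ true_betas + noise   # simulated asset returns

# Ordinary least squares: solve asset_excess ~= factor_excess @ betas
betas, *_ = np.linalg.lstsq(factor_excess, asset_excess, rcond=None)
print("estimated betas:", np.round(betas, 2))
```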

The Wikipedia article on APT and Mirzayev’s piece discuss the importance of developing a portfolio of assets against which to compare but these nuances, while important in the day-to-day implementation, don’t blunt the general idea of APT – namely that the value of an asset (as determined by its return) depends on various factors and can only be judged in relation to the market as a whole.

The CAPM differs from APT primarily in its use of a single factor (a single β) to capture the systemic market risk.  This aspect of the CAPM means that it assumes markets are perfectly efficient.  It isn’t as accurate, but it is much easier to use, and this one feature explains its staying power.

One final note: the devil really is in the details for much of this work.  In particular, it doesn’t seem as if there is a well-known discussion of the numerical stability of these results.  Given that linear regressions (typically multivariate) are used to determine the betas and, consequently, the risk premiums, there seems to be room to determine just how much additional risk is buried within the algorithm.  But that is a blog for another day.

Market Inefficiencies and Arbitrage

Arbitrage: it isn’t an often-heard word when discussing the economy.  In fact, I consulted the indices of 6 textbooks in economics, covering micro or macro or both and ranging from mostly qualitative to strongly mathematical, and found not a single entry; yet its importance to markets can hardly be overstated.  In order to understand why this is, we need to first think a little about how markets work and the role of information in the marketplace.

A key observation is that markets work most efficiently when they are at a natural equilibrium, and their approach to equilibrium or even the equilibrium they assume can be impeded by insufficient information about the goods and services being sold.

For example, in Chapter 18 of his book Principles of Economics: Economics and the Economy Version 2.0, Timothy Taylor discusses how imperfect information can impede economic participation in each of the markets for goods and services, labor, and finance.  A person seeking to buy a used car is naturally wary about the quality of the car, about which they know very little and the seller knows far more.  An employer looking to hire a new employee is also naturally wary about the quality of the employee, because all that he can discern comes from a résumé and an interview.  (As a side note, this is why the coding interview, in which prospective computer programmers are given real problems to solve, exists as a hiring gate.)  Finally, a person seeking a loan from a bank has to contend with the bank’s inherent skepticism about the soundness of their repayment prospects, even if the person has an impeccable character where borrowing money is concerned.

These reluctances serve to slow down economic participation, push the equilibrium away from where it would sit in a market with perfect knowledge, and can lead to unintuitive situations where raising prices can actually raise demand rather than the other way around (that, however, is a post for another day).  Collectively, economists term all these ‘non-ideal’ market behaviors inefficiencies.

A sad but powerful example of the kind of havoc uncertainties can wreak is summarized in Jamie Goldberg’s article Downtown Portland businesses, derailed by pandemic, say protests present a new challenge.  In the article, Goldberg quotes Andrew Hoan, president and CEO of Portland Business Alliance, as saying of downtown Portland:

It’s unique, it’s boutique, it has the best of all kinds of experiences for customers and for employees and for employers, and it’s devoid of that now because of the uncertainty.

Markets have developed lots of different ways of dealing with inefficiencies and the risks that follow.  Some of the more well-known ones are guarantees, certifications, and insurance and premiums.  Interest rates on loans are structured to provide the lender some insurance against the default of the loan as seen in the usual formula:

Interest Rate = Risk Premium + Expected rate of inflation + Time value of money

The last two terms collectively account for the simple fact that a dollar spent today provides more utility than a dollar spent tomorrow because 1) inflation eats away at the purchasing power of money (the ‘Expected rate of inflation’ term) and 2) the enjoyment derived from a good or service is less when one has to wait for it (the ‘Time value of money’ term, representing delayed gratification).  Since both of these effects are known beforehand, they attach to any transaction.  The first term (‘Risk Premium’) represents all of the uncertainty brought on by the lack of knowledge about the transaction (does the good have high quality? is the borrower going to pay it back? and so on).
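As a trivial numeric illustration of that decomposition, here is a short sketch; the three percentages are invented for the example, not quoted from any lender.

```python
# Toy illustration of the loan-rate decomposition above; the numbers are
# made up for the example.

risk_premium = 0.030        # compensation for default/quality uncertainty
expected_inflation = 0.020  # erosion of purchasing power
time_value = 0.015          # delayed gratification / real return

interest_rate = risk_premium + expected_inflation + time_value
print(f"Quoted interest rate: {interest_rate:.1%}")   # -> 6.5%
```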

The mechanism of arbitrage is another powerful way for the markets to deal with some of these inefficiencies by making it profitable for traders to equalize information between all parties.  It just isn’t as broadly familiar.

In a nutshell, arbitrage is the purchase and subsequent sale of some good (typically called an asset) in order to profit from a positive difference between the final market’s price and the asset’s price in the original market.

In theory, the exercise of arbitrage offers zero risk because the resell is instantaneous and the receiving market can accommodate the amount being resold.  In reality, nothing is truly risk free, and a number of complications can arise that blunt the attractiveness of arbitrage.

For example, suppose that bananas sold for $1.00/pound in Joe’s Market but $1.40/pound in Fred’s Market elsewhere in town.  Then a person can possibly make money by purchasing a supply of bananas at Joe’s and transporting them to Fred’s market for resale.  In this fashion arbitrage eliminates or, at least, helps to lessen imbalances in the economy caused by a lack of information (since if shoppers knew they could get bananas cheaper at Joe’s than Fred’s they would, all other things being equal, shop for bananas at Joe’s).  Arbitrage also facilitates a better match between supply and demand, again smoothing out imbalances caused by lack of information and other factors.  However, it is important to realize that arbitrage is distinct from distribution by a middle man, even if they share some aspects.

Many real-world factors contribute to making this typical introductory example more complicated than it might seem at first glance.  The primary complication is that the time needed to purchase, transport, and subsequently resell the goods must leave the arbitrage worthwhile.  The profit earned on the resale must be great enough to outweigh the transportation costs, regulatory fees, and opportunity costs in order for people to engage in it.  These barriers are why we don’t typically see parties engaged in retail arbitrage.

As the internet has made the flow of information vastly easier, it is now possible to find people talking about their retail arbitrage efforts moving product from brick-and-mortar shops for resale on Amazon and eBay.

Of course, retail arbitrage is still a rare thing, not only because of the resale risk but mostly because there are more efficient ways for most of us to make money without the ‘hustle’.  Far more common and more important is the use of arbitrage in a macroeconomic setting, where it is used to smooth out inefficiencies in the financial markets.

In the coming months, this column will explore some of the aspects of arbitrage in the macroeconomic setting, how arbitrage activities tend to cause prices in different markets to converge, and what may happen when arbitrage opportunities are frustrated.

The Business of Comics

Economists long for those perfect case studies that work so well in illustrating an abstract point of economic theory by presenting a real-world example full of human drama and marketplace interactions.  And, while it is still too early to tell whether the latest news from the comic book industry will become a featured story in econ textbooks years from now, it certainly has all the fixings.

The news centers on an interesting development that just reared its ugly head in that microcosm of the entertainment world that produces the majority of the world’s comic books (or sequential art for the more refined): after a 25-year exclusive relationship, DC Comics has decided to part ways with Diamond Comic Distributors.

To understand the ramifications involved, a history of how comic books are sent to the market is in order.

For much of their history in the United States, comic books were considered as just another periodical, and the primary method by which they were sold to the end consumer was through the newsstand.  The newsstand owner negotiated quantities with the publisher and distributor months in advance.  He displayed the books for a time, and split the revenue with the partners.  Since the units were provided on consignment, any unsold copies were then returned to the publisher (usually, to save on shipping, only the covers were returned with the proviso that the newsstand destroy the interiors).

Marvel Comics began to explore the concept of a direct market for comics in the late 1970s under the direction of Jim Shooter.  The primary innovation was that the comic books were sold to a store well below cover price and the store would subsequently sell the product at cover price netting, as revenue, the price differential.  Since the store owned the product outright, any unsold units remained in its possession.  The genesis of the direct market is marked in late 1980, with Dazzler #1 being the first regular, monthly comic being sold exclusively in the direct market.

The store generated a greater profit per unit than at the newsstand, but it also ran the risk of buying a product no one wanted that languished on the shelf.  Comics publishers also helped to mitigate the risk by publishing stories with large crossover events that linked poorer-selling titles to better-selling ones and by writing stories that spawned interest in back-issue purchases.  The consumer enjoyed the advantage of being able to sample the book before purchase, but at the cost of paying the markup.

This approach put a premium on distribution.  After some settling-in time, there emerged three companies that handled the majority of comic book distribution:  Capital City, Diamond, and Heroes World.  Riding high on the comics boom of the early 90s, Marvel Comics bought Heroes World Distribution in 1995 to exclusively distribute their product.  Diamond responded with exclusive deals with Dark Horse, Image, Archie Comics, and Marvel’s main rival DC Comics.  Left out of the main action, Capital City soon after sold out to Diamond.  An overextended Marvel then filed for bankruptcy protection in 1996, effectively ending Heroes World Distribution in 1997 and leaving Diamond as the only game in town.  Comic Tropes’s video How Distribution has Saved and is Now Killing Comics gives a comprehensive summary of this history, including some details about the anti-trust litigation not discussed here.

It’s hard to tell what impact this monopoly has had on comics over the years.  Sales figures of US comics to the direct market show a growth from about $15 M to $27 M in the twenty years since (taken from Comichron’s sales summaries), which amounts to about 15% growth after inflation.  Spending on general entertainment grew closer to 30% over a similar timespan, but comics has had to contend with the emergence of a host of substitutes including manga, anime, and videogames.  Comic book publishers resorted to an ever-increasing array of gimmicks to try to lure more customers in (rebooting a series or a whole universe, tie-ins with movies, changing the lead character, etc.), but often stores were stuck with hundreds of back-issues with no way to unload them.  Susana Polo’s video, The BEST WAY To Buy Comics!, presents some of the ways in which the direct market distorts comic book creativity.

This delicate balance continued for over 20 years until recently, when DC Comics broke from the fold, and the interesting question is, why?

According to Peter David, a long-time comic book creator who got his start in sales at Marvel, this move is DC’s way of declaring war on its long-time rival.

One can certainly understand David’s interpretation given the fact that DC makes up about 30% of the total US comics market share ($8-9 M versus $11-12 M for Marvel and $28-32 M overall in 2017), but before concluding he’s correct let’s explore the economics of the situation a bit deeper.

First, the basic facts about the new arrangement.  DC is going with three distributors instead of one:  UCS Comic Distributors will handle the east coast, Lunar Distribution will handle stores in the west, and Penguin Books will provide graphic novels and collections (not monthly comics) to US bookstores.  In the Bleeding Cool article Stagnant DC Sales, Diamond Plans and What Happens Next – The Gossip, author Rich Johnson points out that, while Diamond has a transportation hub (Diamond UK) in England that enables it to service the European market with US comics in a cost-effective way, neither UCS nor Lunar does.  One of the UK stores mentioned in the piece says that it now has to purchase comics at above cover price due to the high cost of shipping, and Johnson also notes that, even though the UK makes up 10-15% of Diamond’s overall distribution, DC owns the lion’s share of that slice.

Second, some feedback from DC is in order.   In his article DC Comics Admits Comics Have “Sustained Stagnant Growth” In Decision To Cut Ties With Diamond Comics Distributors in Bounding Into Comics, John F. Trent covers the email that AT&T-owned DC Comics sent to retailers on June 5, 2020, in which they admitted that “sustained stagnant market growth” figured into their decision to part with Diamond Comics Distributors.  The article cites an interview with Mark Gallo, the owner of Past Present Future Comics.  Gallo said that

[t]his new entity won’t have any incentive to provide terms in my opinion. I personally have 28 day terms with Diamond and I’m assuming I’ll be cash on delivery with this new distributor. This looks really bleak.

Gallo also added that they're

[b]laming a distributor instead of their unsellable woke trash product. Total deflection. Comics fans want good stories and art not a laundry list of woke writers interjecting their politics into character development and storylines.

Johnson believes the gossip on the street supports Gallo’s view that the product itself bears blame.   Johnson notes that Pamela Lifford, President of Warner Bros. Consumer Products, has no love for DC Comics, that she views the comics as costing too much to make (production, labor, and time), and that she would rather have DC focus on its graphic novel line and bookstore market.

We are now in position to better understand the possible reasons DC split from Diamond and to judge Peter David’s assertion that this is a declaration of war by DC on Marvel.

  • Suppose, for the sake of argument, that DC does drive Diamond out of business. Wouldn’t either UCS or Lunar decide to take on the displaced companies (Marvel, Image, Dark Horse, IDW, Boom, and so on)?  They would be fools not to grow their business regardless of how disgruntled DC might be.  Also Marvel Comics might be logically viewed as the development arm for the movie juggernaut that is the MCU so it is unlikely that it would be allowed to go under.  We conclude that it is highly unlikely that this move represents war on Marvel.
  • Perhaps then it is a war on Diamond. This interpretation also seems thin as DC is directly cutting off its nose to spite its face with regards to its UK market.  If DC’s comics are going to increase in price overseas, there will be a downward push on demand, and Marvel and the others may fill the void.  Also, unless Lunar and UCS offer even lower prices, there is a distinct possibility that shops will lower the demand on DC as well.  Given that the company has not managed to grow their comics revenue, and their attempts at a movie universe to compete with the MCU have been far from successful, it is distinctly possible that Diamond will be able to reestablish the lost DC revenue as other publishers increase their market share.
  • So, it seems that the only likely conclusion is the one hinted at in Johnson’s article: DC executives want to lower unit costs, and they don’t care if market share goes with it.  Since sales are stagnant, the only ways to increase profitability are to either cut product or cut operating costs.  In going with their new distributors, DC must be achieving a major savings in order to justify the risks involved.  If the line of DC Comics fails, they can always say they tried, and then they can live for years off of the graphic novels and collected stories while they rebuild a better and cheaper workforce.

No matter what interpretation one finds fits the facts best, there is no doubt it will be an interesting ride.

If Eastman Were Alive Now

Sometime back I wrote an article arguing how Kodak lost focus on its vision and went from one of the most significant companies in the world to a small provider of a niche product (How Kodak Went So Wrong – January 23, 2015).   The basic premise of that argument is that if Kodak had kept true to George Eastman’s original motivation for producing photographic film – namely that he wanted to enable people to ‘capture memories’ as easily as possible and film was the means to that end during his lifetime – then Kodak would have transitioned to digital and remained a world business leader.  The movement from film to CCD-based digital ‘memory capturing’ in the 1980s would have been as logical a progression as the transition from plate-photography to film had been in the 1880s.

From a purely economic perspective, the current COVID-19 pandemic is providing valuable insight into how companies are positioned to either rise or fall based on their business savvy and agility in adapting to these dynamic and unpredictable events as they unfold.  Economists and business analysts will be able to write papers for decades to come examining each and every sector of the economy.

That said, this post will engage in a little alternative history and counter-factual conjectures in asking what would the modern landscape look like if Kodak had been able to hold on to the business sense and entrepreneurial spirit that George Eastman had in such abundance.  To this we are going to make an admittedly radical assumption in imagining that Kodak’s chemistry wing managed to produce a fountain-of-youth serum that only worked on its founder.

[Image credit: Published by B. C. Forbes Publishing Company, New York, 1917 - https://archive.org/details/menwhoaremakinga00forb, Public Domain, https://commons.wikimedia.org/w/index.php?curid=19172293]

Our alternative-universe story begins in 1929 when George Eastman was 75 years old.  While tinkering on improved roll film, Kodak’s chemical division accidentally creates a noxious compound whose merest whiff causes violent headaches, nausea, and vomiting.  Trying to isolate what happened, the lead chemists realize that they can’t quite duplicate the experiment but that what they’ve created is a highly volatile and potent poison.

Learning about the lab accident, our alternative Eastman, who is already beginning to suffer from the spine ailment that would drive the real Eastman to suicide in 1932, decides that, if he is to take his own life, he would prefer to do so with this one-of-a-kind toxin that his company has produced.  Sneaking into the lab late at night, Eastman quaffs the poison and collapses, thinking, as he loses consciousness, that the end is nigh.

Imagine his surprise when he awakens hours later, his spinal pain completely gone and his age regressed until he looks and feels as he did in his mid-thirties, around the age when he had developed the original Kodak camera.  Restored to his prime with vigor to spare, he resumes his role steering one of the largest companies of his time.

Now filled with inexhaustible youthful vigor, he tackles the new technology of that era: quantum mechanics.  His long-standing interest in chemistry is supplanted by this new science that underlies it.  It’s a slow go but just as he is starting to master the subject conceptually, World War II breaks out.

Under his leadership, Kodak supplies aerial photographic support to the Allies’ intelligence apparatus.  After the war, Eastman has the company build upon the technical innovations it produced during the conflict and the goodwill that came with serving the country during its time of need, further positioning Kodak as a go-to company that makes life better.

Now believing that an even better experience awaits his consumer base, Eastman has Kodak develop a research branch focusing on the science of optics and electronics.  He backs a partnership with Bell Labs and directs his technical staff to stay abreast of developments in the field.

The critical juncture takes place about 25 years after the end of the war.  As in our own timeline, the late 1960s find Boyle and Smith making the first charge-coupled device at Bell Labs, followed shortly after by Steven Sasson, a Kodak employee, developing and patenting the first CCD-based camera in 1975.  However, unlike in our own timeline, with Eastman at the helm, Kodak quickly jumps on commercialization and begins to gather market share with its digital photography.

In 1984, the alternative-timeline Kodak eagerly agrees to supply the official film of the Los Angeles Olympics.  This move allows Kodak to keep rival Fujifilm at bay while also enabling the corporate giant to again use favorable public sentiment to its advantage in promoting its new digital photography offerings.

The time of crisis now passed, Kodak steamrolls into the modern era.  Eastman’s vision of putting ‘the what’ (capturing memories) before ‘the how’ (photographic film) allows Kodak to nimbly respond to an ever more rapidly changing market.

By the late 1980s, Kodak has partnered its CCD-based technology with Sony to make a consumer camcorder second to none.  By the 1990s, recognizing how the internet would allow a person to share the memories he had captured with Kodak cameras, Eastman guides the company to invest heavily in the internet.  Kodak develops, patents, and licenses streaming technology years ahead of what was developed in our own timeline.  By the mid 2000s, Kodak, now recognizing the move towards miniaturization and consolidation desired in the consumer telecom industry, beats Apple to the invention of the smartphone.  Finally, capitalizing on the growth of broadband internet and increasing speeds, this alternative Kodak corners the market on teleconferencing and collaborative applications like Zoom, Webex, or Adobe Connect.

When the COVID-19 crisis hits this alternative timeline, Kodak, already a household word, is able to further cement its reputation in the eyes of the consumer as the company that helps make, capture, and share memories with each other while staying safe.

While it is true that the foregoing is a work of hypothetical fiction with no way to either prove or disprove its veracity, it is also certainly true that at least some of the events narrated would have actually been within Kodak’s grasp had the company simply kept true to the vision of George Eastman.

The Good, The Bad, and the Corona

Well, it is entirely obvious by now that life in the USA has changed due to the coronavirus’s clutch on the world as a whole.  In these seemingly desperate times, as in similar crises, there is always a bit of good mixed in with the bad, and some other things worth commenting on as well.  Let’s start with some aspects of the good.

The scene is 5:30 am on a Tuesday morning.  I usually get up this early, but ordinarily I stumble into my home office and look for research ideas or inspiration for a new blog.  This day I did nothing of the kind.  Shuffling off to the bathroom, I ran a comb through my hair, freshened my face, and changed from pajamas to street clothes.  Slipping out of the bathroom, I went downstairs, fetched my fob, and at 5:45 am left my house.  My destination, the local supermarket, lay some ten minutes away.  When I arrived I queued up behind the dozen or so people there before me, each keeping a 6-foot buffer between himself and his neighbors to the front and rear.  A little after 6 am, the store opened and we all somberly entered in single file.  Most, if not all, of us went straight down the paper products aisle looking for that one commodity that is to our modern situation what gasoline was to Mad Max: toilet paper.  It was eerie and surreal to walk through an area of the store that until 2 months ago held an abundance of products to find just under a hundred packs of rolls, most of which were scooped up by myself and my fellow early-morning shoppers.

There are many good aspects of this sorry situation, but I’ll only comment on three.

The first is that, despite the stay-at-home orders and the general shuttering of the economy, the American can-do spirit has not entirely withered.  There are still manufacturing activities going on in the country.  The supply chains may be clogged but are not stopped, and we still enjoy a standard of living that was entirely inconceivable a century ago.

The second is contextual and may not come home to everyone, even though it should.  What we are experiencing with these various shortages is a small foretaste of what socialism would be like if we embraced it.  Long lines, empty shelves, and desperation are always the earmarks of socialism and communism.  No country on Earth, even the so-called socialist Scandinavian nations, can have a vibrant economy under socialism.  Denmark and Sweden (and probably the others in the fever dreams of politicians who believe in a Nordic utopia) have clearly rejected the label of socialism and pointed to their free-market practices.  And well they should, because free-market practices are what fill shelves with toilet paper, sugar, napkins, ground beef, and so on.  And, touching on my first point above, we can see experientially just what happens when the market is not free; hopefully, this will be the worst we’ll ever see.

The third is far more prosaic, dealing with substitution.  Economists like to point out that when supply is low, demand is high, and prices rise, consumers will substitute similar alternatives for the good they usually purchased.  For example, people might switch to ground turkey if beef prices sharply increase.  I think economists should have a field day with papers galore based on what I have observed.  Everywhere I went in the supermarket, there were shelves totally missing contents next to shelves brimming with products very few wanted.  I know that I have tried new items that I ordinarily wouldn’t have purchased, but it seemed that even in crisis, choosy mothers were finicky about what foods they were allowing in.  It would be fascinating to see a breakdown of what threatened people still wouldn’t touch and whether the buyers for the various chains change how they purchase based on these observations.

On the bad front, I’ll focus only on one thing, but a really bad one.  The nation’s governors, mayors, and elites seem to have let, in far too many instances, power go to their heads.  The textbook example is probably found in Michigan, where the following table compares the do’s and don’ts, courtesy of Governor Gretchen Whitmer:

Do | Don’t
Purchase liquor, lottery tickets, and marijuana | Purchase seed, paint, and rugs
Go boating with a canoe, rowboat, or kayak | Go boating with a power boat or jet ski
Get an abortion | Get a biopsy or joint replacement

Louisville, KY Mayor Greg Fischer comes in a close second: he ordered churches to cease ‘drive-in’ services where each car was at least 6 feet from neighboring ones but wouldn’t ban drive-through food pickup, where the distances between strangers were much smaller and the number of direct interactions much higher.  I challenge anyone to find the logical rhyme-and-reason of these allowances and prohibitions.  The table listings smack of lobbyist influence and crony capitalism.  Milton Friedman certainly seems vindicated in his belief that big government exists to grant favors.  In addition, all sense of cost-benefit analysis and awareness of hidden costs seems to have gone out the window in shuttering the national economy.

Sure, COVID-19 seemed like the super-flu ‘prophesied’ in Stephen King’s The Stand back at the beginning of March, but now the emerging evidence seems to indicate that the communicability of the disease is much higher and the lethality a lot lower than first thought.  Still, cries persist that even one life lost is too many.  What utter nonsense.  Below is a table, adapted and supplemented from CDC data, indicating how people died in 2017.

Cause of Death | Number of Deaths
Heart Disease | 647,457
Cancer | 599,108
Accidents | 169,936 (including 37,133 traffic deaths)
Chronic Lower Respiratory Diseases | 160,201
Stroke | 146,383
Diabetes | 83,564
Influenza and Pneumonia | 55,672
Nephritis, Nephrotic Syndrome, and Nephrosis | 50,633
Suicide | 47,173
COVID-19 (as of 4/24/20) | 44,973

I get that social distancing impeded the immediate spread (although the Chinese Communists could have nipped it in the bud if they hadn’t lied), but let’s get people back to work.  We don’t shutter the economy because over 600,000 people die of heart disease, no doubt aggravated by working in close proximity to other people.  The unseen cost of keeping the economy moribund will cause more addictions and more suicides for years to come.

I’m not the only one advocating for a measured approach to the risk imposed by COVID-19.  Heather Mac Donald, in her article The Deadly Costs of Extended Shutdown Orders, argues quite convincingly that focusing on saving “just one life” effectively does more harm than good and that our governing elite are using anything but the science of risk analysis to make policy.

I’ll end on an ugly note, since the blog title suggests a more than passing similarity with a famous western.  The behavior of my fellow man can be very ugly, despite certain philosophers claiming that tragedy and crisis bring out the best in people by shaking them from their complacency.  The scarcity of toilet paper could be understandable as a supply-side problem if I hadn’t seen a neighbor three streets away try to scurry into her home in the early hours last week.  With two 20-packs of toilet paper under each arm and another 20-pack in the trunk, one has to wonder whether she eats the stuff or has simply given in to panic and fear and is hoarding.  Let’s just say that my answer to that question doesn’t favor toilet paper as any part of the food pyramid.

Economics and Ergodicity

This month, I came across a very interesting article about a proposed resolution to what the author regards as a long-standing problem in economics.  The basic point of the paper, which is entitled The ergodicity problem in economics by Ole Peters (Nature Physics, Vol. 15, December 2019), is that classical economic analysis is fundamentally flawed.  According to Peters, the fatal mistake made for hundreds of years is the ergodic assumption that equates the time average of an economic process (say investing) by an individual to the average of the same process across an entire population at a given time.  Determining whether this assumption holds is extremely important if economists want to be able to model what the average person will do.

Ergodicity is a concept originating in the branch of physics known as statistical mechanics.  Statistical mechanics seeks to characterize physical systems that possess vast numbers of moving parts in terms of a vastly smaller set of parameters.  Evolution of a complex system is generally described in terms of how the averages and standard deviations associated with all these parts change in time.  By assuming that the system is ergodic, the physicist can state how a system will evolve in time simply by looking at the average over multiple copies of the system at an instant in time.

An example will help make some of these ideas more concrete.  A typical ‘simple’ physical system with a vast number of moving parts is a bottle of water.  Describing this bottle of water at the supermarket is absurdly simple: one merely specifies the amount of fluid (250 ml, 500 ml, etc.) and the temperature.  If one wanted to be fancy, one could even specify the percentages of trace elements, bringing the number of parameters up to, say, 100.  Despite the fact that 100 is a relatively large number of things to track, it’s still vastly smaller than the number of parameters needed to describe the bottle at a molecular level.  In a 500 ml bottle, there are approximately 1.86 x 10^25 water molecules or about 9.3 trillion trillion molecules for each dollar of federal debt, and each requires, at a minimum, 7 numbers to describe its motion.

Once the bottle is bought and brought home, it will have its own local history.  It may be placed in the refrigerator or left in a hot car; it may be opened and partially or totally drained or kept shut for a later consumption; and so on.  Ergodicity assumes that each of the bottle’s observed states, as it evolves in time, can be matched with a single bottle in a large population of differently prepared bottles at a given time.  An unopened 500 ml bottle that warms from 5 to 20 C can be thought of as first visiting the state of an identically-sized bottle that is held at 5 C, then a different 500 ml bottle held at 5.5 C, then yet another bottle of the same size held at 6 C, and so on.   In this way the time average of the single bottle’s temperature can be derived from an average over a population or ensemble of bottles each kept constant at its own particular temperature. Alternatively, the large population’s statistics may be derived by taking a time average of a single member.  Which direction (time-to-ensemble or ensemble-to-time) depends on the physical system and the experiments being performed.

The ergodicity assumption has been quite successful in thermodynamics, but Peters’s contention is that the types of dynamical systems found in an economy do not share this feature with the dynamical systems found in nature.  To support this claim, he offers a simple gambling model that will be explored in the rest of this column.

In Peters’s model, a person can participate in a repeated wager where 50% of the time he increases his wealth by half and the other 50% of the time he loses 40% of all that he has.  According to Peters, classical economics would predict that the potential gambler would jump at this chance.  The gambler’s enthusiasm derives from his analysis, using classical concepts from economics, showing that the expectation value for this gamble (average gain or loss, denoted by E(gamble)) would be a 5% gain since

E(gamble) = Prob(win) Payoff(win) + Prob(loss) Payoff(loss)
          = 0.5 ( 0.5 - 0.4 ) = 0.05

where the notation Prob(win/loss) = probability of winning or losing (0.5 for both), Payoff(win/loss) = the outcome of a win or a loss (0.5 or -0.4 for a win or loss, respectively).

Peters points out that no rational person would actually agree to this gamble, and herein lies, he argues, the disconnect between classical economic predictions and observed participation in the economy.  This is where ergodicity comes in.  Basically, the average person understands intuitively that this gamble, despite its constant positive expectation value as a function of time, is not ergodic.  That is to say, the time average of a gambler’s wealth, assuming he repeatedly plays, doesn’t grow at a roughly constant 5% but rather leads to ruin: each round the typical multiplicative factor is the geometric mean sqrt(1.5 x 0.6) ≈ 0.95, a roughly 5% loss.

The article presents a rather disturbing graph in which the wager is simulated as a random process for 50 members of the economy who participate in repeated goes at the same gamble.   My own reproduction of this process using 150 members is shown below.

Each of the grey lines represents the time evolution of the relative wealth of a single gambler who repeatedly engages in the Peters wager.  The black line is the average over all the gamblers - the time evolution of the ensemble average.  If ergodicity held, then this black line would equal the red line.
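For reference, here is a minimal Python sketch of such a simulation; the round count and random seed are arbitrary choices for illustration, and plotting is omitted in favor of a few summary numbers.

```python
# Minimal sketch of the Peters gamble: 150 gamblers repeatedly take a wager
# that multiplies wealth by 1.5 (win) or 0.6 (loss) with equal probability.
# The round count and seed are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(42)
n_gamblers, n_rounds = 150, 100

factors = rng.choice([1.5, 0.6], size=(n_gamblers, n_rounds))
wealth = np.cumprod(factors, axis=1)            # each row: one gambler's trajectory

ensemble_avg = wealth.mean(axis=0)              # the "black line" in the figure
naive_expectation = 1.05 ** np.arange(1, n_rounds + 1)   # 5% growth per round
typical = np.exp(np.log(wealth).mean(axis=0))   # geometric mean: the typical gambler

print(f"naive expectation after {n_rounds} rounds: {naive_expectation[-1]:.3g}")
print(f"sample ensemble average:                   {ensemble_avg[-1]:.3g}")
print(f"typical (geometric-mean) wealth:           {typical[-1]:.3g}")
# Per round the typical growth factor is sqrt(1.5 * 0.6) ~= 0.95, so almost
# every individual trajectory decays toward ruin even though the expectation
# value rises 5% per round.
```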

The bulk of Peters’s article is a sophisticated analysis of why ergodicity fails to hold and under what conditions.  It is a difficult read but likely very important in revising economic theory.  But, regardless of how important the technical details that emerge may be, an even more important point will be understanding how the human participant, lacking all of this specialized expertise, knows to stay away from this type of gamble.  Insights into this last point are likely to be more profound than the underlying mathematics.

Ants, Grasshoppers, and Student Debt

Ordinarily this column stays away from politics with a capital ‘P’. While economics necessarily borders on politics, defined as the basic interaction between people, this column tries hard not to call out individual Politicians nor to take a partisan position. That said, the recent interaction between Elizabeth Warren and an unnamed Iowa father is worth discussing in that it brings into sharp focus Frédéric Bastiat’s old point about what is seen and unseen in the economy.

The interaction is shown in the following YouTube clip (first minute is sufficient).

The dialog is short and worth repeating in print here.

Iowa Father: I just wanted to ask one question. My daughter is getting out of school. I’ve saved all my money. She doesn’t have any student loan. Am I going to get my money back?

Warren: Of course not.

Iowa Father: So, you’re going to pay for people who didn’t save any money and those of us who did the right thing get screwed.

Warren: No, you’re not getting screwed.

Iowa Father: Of course we did. My buddy had fun, bought a car, went on vacations. I saved my money. He made more than I did. But I worked a double shift, worked extra - my daughter worked since she was 10.

The interaction got hotter after that, but the main point the Iowa Father made is correct. This observation certainly flies in the face of those who look only at the seen cost of student debt.

There are clear problems with young people holding student debt. In many cases students have been sold a bill of goods that a college education is simply the only way to get ahead and make good money. Plumbers, mechanics, and HVAC technicians all over the country would no doubt laugh at the idea that you need a college education to make money. In addition, graduates across the land are probably lamenting the fact that $50,000 of student debt and a degree have not led to the promised land.

This problem pairing of a bad degree with burdensome student debt is only exacerbated by the fact that students can’t even get out from under it by declaring bankruptcy, which would be a fine remedy since the students would simply bear a different cost.

But the sympathy that we all may have for the plight of the bad-degree/burdened-by-debt individuals (a seen cost) should not blind us to the unseen costs that the Iowa Father pointed out.

The Iowa Father acted like the proverbial ant in Aesop’s parable of the ant and the grasshopper. In the original parable, the ant saved food during the summer against the lean days of winter. In contrast, the grasshopper frolicked and danced, putting nothing aside for any dark days. When winter descends, one finds the grasshopper begging the ant for food and the ant refusing to let the grasshopper share in the fruit of its labor.

The Iowa Father did the right thing. He scrimped and saved. He forewent fun, new cars, and vacations. He delayed his gratification in order to secure his daughter a college education unencumbered by student debt. His buddy was like the grasshopper. He spent and he idled and his kids now have to live with student debt.

But just like some people want to rework the original parable to point out the lack of compassion of the ant (as if the ant were a villain and not a sober individual who recognized the need for thrift), some people only want to focus on how bad it is to have student debt. Suppose we, as a society, chose to forgive that debt. Any compassion one might be showing to the student and his family is far outweighed by the callousness and lack of compassion we are showing to the Iowa Father.

He is bearing two unseen costs and we are effectively telling him that his work, efforts, and his virtues are all reasons he should be enslaved by the state.

The first unseen cost is the opportunity cost he bore while saving for his daughter’s college education. Think of all the merry times he not only missed but will never be able to experience again. There must have been a trip, perhaps to Disney World, with his teenage daughter that he passed over in order to put money aside. Probably he can afford the trip now, but his daughter is much older and the moment is lost. He was also denied the fond memories of many family outings and the enjoyment of a new car, all of which he sacrificed so that she could graduate without debt.

The second unseen cost is the increase in taxes and cost of living that this fellow must now endure so that the creditors can have their loan returns met while the profligate takers of those loans skate free. This step adds insult to injury, making him not only pay for his own sacrifices but also sacrifice for the gains the other family enjoyed.

By ignoring the costs borne by the Iowa Father, what we are effectively saying is that there is a certain class of people in society who deserve to be oppressed and exploited, and that this class is exactly the group that should be rewarded and held up as an example. We’ve turned the world upside down, because forgiving student debt not only victimizes the “ants” amongst us, it incentivizes the “grasshoppers” to be even more prodigal.

In short, the Iowa Father was correct - he is getting screwed.

Black Friday

Well, another Black Friday is upon us.  Once upon a time, when I was younger, I actually viewed Black Friday as a special day to go out and enjoy the hustle and bustle of shopping, to see the Christmas decorations festooning every store, and to buy gifts for the loved ones in my family.  As I got older and became a bit more cranky, the lust and obsession exhibited by certain people began to weigh me down and caused me to think that Black Friday was to be avoided at any cost.  As I’ve gotten even older and more educated about economics, I’ve come back around to liking it, but for different reasons.

When viewed objectively, Black Friday is quite an economic miracle.  Starting some time after Thanksgiving (the exact time seems to change every year), millions of Americans make hundreds of millions if not billions of economic choices in just one day.  Stores have to plan and prepare for this bacchanalia of bargain hunting by answering a host of questions.  These include:

  • which items should be stocked,
  • how many of them should be ordered,
  • at what price should they be sold,
  • how much should be spent on advertising,
  • and so on.

What is truly remarkable is that no central planner set this organized insanity up.  No elite intelligence manages all of the variables for each and every institution.  Rather the invisible hand of capitalism operates on a massive scale.  Every step from discovering and processing raw materials, to designing a product that people want, to the manufacture, shipping, distribution, and retailing of the good, is done by an intricate, complex web of self-interested decision making.

It is in this way that Black Friday really is a descendant of those first lessons from Thanksgiving: the abandonment of the ultimately destructive rot of forced shared work and outcomes, and the embrace of the satisfaction of earning your position by hard work.  Stated simply: the creation of, and participation in, a free (emphasis on free) market.

So, it was with great disappointment that I read the Thanksgiving section (Section 3) of James W. Loewen’s book Lies My Teacher Told Me: Everything Your American History Textbook Got Wrong.

Loewen starts his section with a variety of quotes, obviously intended to set the overall tone about the myth that citizens of the United States entertain about the significance of the Thanksgiving.  The three most provocative quotes are:

Considering that virtually none of the standard fare surrounding Thanksgiving contains an ounce of authenticity, historical accuracy, or cross-cultural perception, why is it so apparently ingrained?  Is it necessary to the American psyche to perpetually exploit and debase its victims in order to justify its history?

- Michael Dorris
European explorers and invaders discovered an inhabited land.  Had it been pristine wilderness then, it would possibly be so still, for neither the technology nor the social organization of Europe in the 16th and 17th centuries had the capacity to maintain, of its own resources, outpost colonies thousands of miles from home.

- Francis Jennings

and

The Europeans were able to conquer America not because of their military genius, or their religious motivation, or their ambition, or their greed, they conquered it by waging unpremeditated biological warfare.

- Howard Simpson

Sigh… where to begin with the material fallacies that abound in each of these arguments.  To start with a general observation: each of these quotations addresses points that have nothing to do with Thanksgiving’s roots but rather with what each commentator perceives as a modern corruption.  It is okay to criticize the modern corruption, but in the spirit of charitable argumentation, each of them should have discussed, at least in passing, the original reason for celebrating Thanksgiving.

Now on to the individual quotes.

Michael Dorris’s quote is distinctly sloppy in failing to define ‘standard fare’.  What exactly does he mean?  Perhaps the Macy’s parade or the Black Friday advertisements or even something he saw on TV.  How hard would it have been to say something like ‘… none of the standard fare, which maintains…’?  And in what way does celebrating the core fact that the Pilgrims eschewed socialism have anything to do with ‘exploiting and debasing America’s victims’?

Francis Jennings’s quote misses the point of the proper roots of Thanksgiving.  Yes, on the surface, everything Jennings says is true; Europe could not maintain an outpost colony in the new world.  William Bradford said as much in his writings.  He bemoans the fact that the colony had to stand on its own two feet while trying to live under the ‘socialist requirement’ levied by the Company of Merchant Adventurers of London, who backed the enterprise.  He recognizes the weakness of the arrangement when commenting on the colony’s lack of success in 1622 and early 1623.  Finally, Bradford directed the colony to abandon the communal property arrangement in favor of individual rights and obligations.  Bradford goes on to say the new arrangement ‘had very good success, for it made all hands very industrious’.  His analysis of what went wrong under the communal arrangement was that it was ‘found to breed much confusion and discontent’.  It was the abolishment of this terrible communal arrangement and the success of adopting individual rights that is the real story of Thanksgiving, a story that Jennings’s quote (and all the others) ignores.

Simpson’s quote is the most egregious of the lot.  The Pilgrims were neither militaristic nor particularly religiously motivated to conquer a new land (they came to the Americas because they had to escape the religious persecution they faced in Europe).  And, as Bradford’s narrative attests, they originally had no ambition and no greed under the communal arrangement.  All of these points may apply to other colonies at other times, but they are mismatched with Thanksgiving as the subject.  Still, I may have been able to overlook these flaws but for the last sentence.  It defies common sense to believe his last assertion; none of the European settlers would have been happy to bring disease to the New World; it would be counter-productive since they would have to worry that the disease would turn around and attack them.  This claim is particularly baseless considering the devastation that Europe bore after the Black Plague ravaged the land.  Judging the Pilgrims through a lens of modern biology is simply ridiculous given that the germ theory of disease was a discovery of the late 1850s, centuries after colonization began and at least 80 years after the ratification of the US Constitution.

There isn’t much to recommend the rest of the section either.  Loewen engages in a variety of material fallacies of his own, including an equivocal use of the term ‘settler’, an ad hominem attack on WASPs, and an over-emphasis on the diseases that tragically ravaged the Native American population.  But, perhaps, the most tragic thing about Loewen’s presentation is his failure to recognize and celebrate the triumph of individual rights over collectivism.

Monopolies Part 3 - Monopolistic Pros and Cons

The last two columns examined the impacts of having monopolies within the economy.  They established that, despite popular opinion and accepted common knowledge that a monopoly can control everything within its sphere of activity, the reality is that a monopoly (or an oligopoly) is under immense pressures that narrowly limit its behavior.  The structure and extent of these limits are best understood by analyzing marginal cost and revenue curves within the context of the supply-and-demand curves (see the previous two posts).  These very forces limit the production of a monopoly to levels below the societally optimal value, which is the real complaint that society at large should have against the monopoly.  A textbook case demonstrating that market forces rule regardless of a company’s size is the tragic and disastrous MCAS system that downed two Boeing 737 MAX 8 aircraft and that has left the company’s reputation in tatters and its future uncertain.

The resulting economic conclusion, based on sound logic and observed outcomes within the real business sector, is that monopolies do damage to consumers by keeping supply lower than desired, not because of malice on their part, but because they have no choice (or rather no profit-optimal choice, which amounts to the same thing).

The natural follow-on inference is that it is in society’s best interest to eliminate monopolies; and, in many cases, this is true.  However, there are times when a monopoly is societally beneficial if not outright necessary.

The prototypical example of this ‘exception’ is those industries that deliver services requiring widespread standardization.  The most obvious examples are utilities that deliver gas, electricity, water, and telecommunications to a community.

Imagine the chaos that would ensue if there were more than one electric company in your town.  Each company, say Exciting Electric and Pinnacle Power, would have to construct its own delivery system (its own wires) to send electric power to the consumer.  What a waste of resources: duplicate sets of power lines, consuming more land, and so on.

An additional concern is that each company would try to have a unique standard (say, Exciting Electric would use 60 Hz and Pinnacle Power would use 50 Hz) as a way of locking the consumer into its service.  Based on these concerns, local communities established utilities as essentially publicly owned trusts with a suite of regulations covering every aspect of the enterprise.

At least that’s how the conventional wisdom goes.  And there is some truth in it, certainly in the past when the electric company owned both the power plant and the delivery system.  But I think there are definite places for improvement.

To understand the possibility for improvement, turn to a utility that was deregulated after decades as a monopoly: telecommunications.  The telephone infrastructure was essentially a regulated utility for decades.  During this time there was little innovation, particularly where the consumer phone was involved.  Since the phone company owned the phones in the consumer’s home, choices were limited to the standard model or the Princess phone, available in a dazzling array of something like three colors: white, black, and beige.  There might have been a red version, but who cares; the point is that there wasn’t much choice, nor was there any incentive to listen to customers.  As a utility, the phone company could charge the consumer with a certain amount of impunity and provide services below what a competitive market would offer.

Many changes happened after ‘Ma Bell’ was broken up in the 1980s.  Suddenly, there was a freer market and an incentive to innovate.  However, the real change came with the invention of the cell phone.  Here was about as free a market as could be imagined.  Different cell service providers sprang up, each providing access on the common, shared delivery system that is the electromagnetic spectrum, each offering competitive pricing, better service, and an increasing pace of innovation.  The market started with ‘brick’ phones, evolved to more compact and slimmer designs, then to flip phones, and finally to the smartphones most of us enjoy today.

None of this innovation would have happened under the old system, and the competition has led to a much better experience for the consumer.  Of course, none of the providers are perfect, and there are times when the consumer has had enough of his particular provider and moves elsewhere, but that is just what a free market promises: a mechanism for improvement, not a perfect finished system.

With these observations in hand, let’s return to the question of electric power generation and delivery.  In The Complete Idiot’s Guide to Economics, Tom Gorman mentions in passing that deregulation has had mixed results.  To quote:

Over the past [25] years or so, the United States has broken up several monopolies and introduced market forces into some formerly regulated industries, such as telephone service, power generation, and air travel.  Results have been mixed.  In the telephone business, greater innovation and lower prices for service have resulted.  Lower prices have also resulted in air travel, but extremely high costs may render the industry ill equipped to function in a truly competitive environment.  The jury is still out on power generation, but early signs in from California are not promising.

While Gorman’s analysis of telecommunications is spot on, and his warnings about air travel seem to be reflected in the recent Boeing disaster, one can’t help but wonder why he is so pessimistic about electric power generation.  The probable answer is the manipulation of the energy market by Enron (the ‘burn baby burn’ scandal), but this situation was hardly the free market gone bad.  There is ample evidence that government and industry were in cahoots; that, through “secret deals with power producers, traders deliberately drove up prices by ordering power plants shut down”; and that it was deregulation-in-name-only, replete with many flaws.

In the case of power generation, many markets have moved, or can move, to a common delivery infrastructure with power generation separately owned by different companies that compete for their share of the market.

And at least some reports show that power generation deregulation works and can save the consumer up to 30%.  So the lesson seems to be that deregulation will work if some imagination and ingenuity are used to harness market forces, while preventing government and/or business from placing thumbs on the scale, and that society should be actively working to eliminate or minimize the presence of even ‘blessed’ monopolies in the economy.

Monopolies Part 2 - The Real Harm of Monopolies

Last month’s post dealt with the disastrous rollout of the Boeing 737 MAX 8 aircraft.  At the heart of the problem were a redesign of the basic jet propulsion system, flaws in the MCAS automation system, and a cutting of corners where safety was concerned, all of which resulted in numerous close calls and two crashes shortly after takeoff that ended the lives of hundreds of people.  It was argued in that post that the reason Boeing rushed the MAX 8 to market was the pressure it was experiencing from Airbus, the other member of the current commercial aircraft duopoly.

The conclusion of the preceding narrative, that Boeing’s presence in a duopoly made it vulnerable to market forces, may seem foreign and counterintuitive to anyone raised on the usual stream of fantasy stories about powerful businesses.  Contrary to the laws of economics, monopolies and oligopolies are usually portrayed in movies as nigh-omnipotent villains that require extraordinary heroism to overcome.  Films such as Rollerball, The Running Man, The Hunger Games, and Repo! The Genetic Opera reinforce the idea that a monopoly or oligopoly sits above the usual laws of scarcity that govern the rest of our lives.

In reality, monopolistic and oligopolistic firms are subject to the same economic forces as the rest of us.  The outcomes and particulars are, of course, different because of the firm’s position within the economy, but the idea that a monopolistic firm runs unchecked and roughshod over society, insinuating its tendrils into every nook and cranny of life, is a fantasy that serves as fodder for the storyteller or the polemicist.

Surprisingly, the real societal ill that monopolies represent (for simplicity, this post will hereafter look only at monopolies) is not their unmatched influence on society as much as their disengagement from society.  The simple way to understand this seemingly upended position is to recognize that, without competition to spur a firm forward, complacency takes hold and the resulting output is below what society demands.

A simple example of this is the typical department of motor vehicles (DMV).  The DMV has an effective monopoly on the goods and services it provides but, as all of us who have had to endure a trip to renew a driver’s license can attest, the service is painfully slow, the processes Byzantine, and the outcomes uncertain.  Its monopoly arises because it produces a unique good (official driver’s licenses) and there are high barriers to entry into the marketplace (here the barrier is law).

To see how a monopoly can underproduce compared to the socially optimal value, one must make a trip through a few graphs about supply, demand, revenue, and cost to construct a monopoly graph.  The arguments here are inspired by a suite of educational YouTube videos, especially the Monopoly Graph Review and Practice-Micro 4.7 lecture by Jacob Clifford, who has a manic style that seems to have been honed by years of teaching high school students, with additional insights and examples taken from Principles of Economics: Economics and the Economy, Version 2.0 by Timothy Taylor.

The first ingredient is the typical graph of supply and demand curves showing the producer’s and consumer’s points of view in determining how much of a particular good to produce or consume as a function of price within a given market (say, for example, the market for televisions).  Where the curves cross determines the equilibrium number of items produced, Qeq, and the equilibrium price the market is willing to pay, Peq.

The shaded regions are perhaps less well known.  The blue one represents the total consumer surplus, defined as the total amount saved by those consumers who would have been willing to pay more, thanks to the market driving the price down to its equilibrium value.  The green one represents the total producer surplus, defined as the total extra profit earned by those producers who would have been willing to sell the item for less, thanks to the market driving the price up to its equilibrium value.  In combination, these additional savings or earnings represent additional resources that can be put to use in other areas of the economy.
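
To make the geometry concrete, here is a minimal sketch in Python, assuming purely hypothetical straight-line supply and demand curves (the numbers are mine, chosen only for illustration).  With linear curves, the equilibrium falls out of a little algebra and the two surpluses are just the areas of triangles:

    # Hypothetical linear curves (illustrative numbers only)
    # Demand: P = 100 - 2*Q   (consumers buy more only at lower prices)
    # Supply: P = 20 + 2*Q    (producers supply more only at higher prices)
    demand_intercept, demand_slope = 100.0, 2.0
    supply_intercept, supply_slope = 20.0, 2.0

    # Equilibrium: set the two price expressions equal, solve for Q, then back out P
    q_eq = (demand_intercept - supply_intercept) / (demand_slope + supply_slope)
    p_eq = demand_intercept - demand_slope * q_eq

    # With straight lines, consumer surplus is the triangle between the demand
    # curve and Peq; producer surplus is the triangle between Peq and the supply curve
    consumer_surplus = 0.5 * q_eq * (demand_intercept - p_eq)
    producer_surplus = 0.5 * q_eq * (p_eq - supply_intercept)

    print(f"Qeq = {q_eq}, Peq = {p_eq}")             # 20 units at a price of 60
    print(f"consumer surplus = {consumer_surplus}")  # 400
    print(f"producer surplus = {producer_surplus}")  # 400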

The next step involves understanding how an individual firm producing items within a given market fits into the market as a whole.  Here the economist defines a spectrum of possibilities with perfect competition being on one extreme and monopoly on the other.

Within perfect competition, there are so many firms producing identical products (say, oranges) that any individual firm is unable to cause the market as a whole to deviate far from Peq.  Such a firm is said to be a ‘price taker’ and it perceives its demand curve as being perfectly flat.  This means that no matter how many products it places on the market, each one will sell at the equilibrium price.  In contrast, a monopolist perceives its demand curve to be identical to the demand curve of the market as a whole, since it is the only provider.  A monopoly is often called a ‘price maker’ since it can set its price, but this should not imply that the monopoly is all-powerful.  It is bound by two constraints.  First, it is generally unable to negotiate different prices for different customers (this is called price discrimination), since if it did, the customer receiving the lower price could resell to the customer subjected to the higher price at an intermediate price, thereby taking some of the profit away from the monopoly.  Second, the monopoly can’t force people to buy its goods, and so it must face the downward-sloping demand curve of the market as a whole.

This fundamental difference in the shape of the perceived demand curve is the key to understanding why monopolies produce below the socially optimal quantity of the good.  The only other ingredients are marginal cost and marginal revenue.

Both marginal cost and marginal revenue quantify the idiom ‘too much of a good thing’.  Marginal cost/revenue measures the change in cost/revenue when production is increased by one unit.  Functioning as derivatives, the values of marginal cost and marginal revenue signal actions that a firm should take to optimize a variety of outcomes.

Based on the concepts of economies and diseconomies of scale, marginal cost will generally fall as production is scaled up from an initial size.  At some point, though, the improvements that come with size taper off and additional structure becomes self-defeating.  As a result, the expected shape for a marginal cost curve (MC) is downward sloping for smaller quantities until it hits a minimum, at which point it rises.  Related to the marginal cost is the average total cost curve (ATC), which, as the name suggests, is the total cost divided by the quantity produced.

The marginal revenue curve (MR) is generally downward sloping with no minimum, reflecting the fact that the only way to attract additional customers is to lower the price, and that lower price applies to every unit sold, not just the last one.  The zero of the MR marks the quantity at which total revenue peaks; selling beyond that point actually reduces total revenue.
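
To see these shapes emerge numerically, here is a small Python sketch using an invented cubic total cost function and the same straight-line demand curve as in the earlier sketch; again, every number is hypothetical and chosen only so the curves take their textbook shapes:

    # Illustrative MC, MR, and ATC from made-up cost and demand functions
    # Total cost: TC(Q) = 50 + 20*Q - 4*Q**2 + 0.5*Q**3
    # Demand:     P(Q) = 100 - 2*Q  (more can be sold only at a lower price)
    def total_cost(q):
        return 50 + 20 * q - 4 * q ** 2 + 0.5 * q ** 3

    def price(q):
        return 100 - 2 * q

    def total_revenue(q):
        return price(q) * q

    for q in range(1, 11):
        mc = total_cost(q) - total_cost(q - 1)        # marginal cost: cost of one more unit
        mr = total_revenue(q) - total_revenue(q - 1)  # marginal revenue: revenue from one more unit
        atc = total_cost(q) / q                       # average total cost
        print(f"Q={q:2d}  MC={mc:6.2f}  MR={mr:6.2f}  ATC={atc:6.2f}")

Running the loop shows MC falling from 16.5 down to about 9.5 before turning back up, while MR declines steadily toward its zero (around Q = 25 for this particular demand curve), where total revenue peaks.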

The monopoly graph sports the perceived demand curve already discussed along with the MR, MC, and ATC curves.

There is a lot to mine from this graph, but we will content ourselves with seeing how a monopoly introduces inefficiency.  The point where the MC and MR curves intersect determines the quantity (Q1) that the monopoly should produce to maximize profit, since at that quantity the extra revenue from selling one more unit exactly balances the extra cost of producing it; producing any more would cost more than it brings in.  The corresponding price (P1) is determined by where a vertical line from Q1 intersects the demand curve (point B).  Taken together, the shaded green and tan areas represent the total revenue (Q1 x P1).  The tan area has a height determined by where the vertical line from Q1 intersects the ATC curve, and it represents the total cost incurred to produce this quantity.  The green area is then the profit.
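
Continuing with the same invented numbers (purely illustrative, not drawn from Clifford or Taylor), the monopoly’s choice can be found by scanning quantities and keeping the one with the largest profit, which amounts to producing right up to, but not past, the point where MR drops below MC:

    # Illustrative search for the monopolist's Q1 and P1 (hypothetical numbers as above)
    def total_cost(q):
        return 50 + 20 * q - 4 * q ** 2 + 0.5 * q ** 3

    def price(q):
        return 100 - 2 * q

    def profit(q):
        return price(q) * q - total_cost(q)

    q1 = max(range(1, 51), key=profit)   # profit-maximizing quantity (where MR meets MC)
    p1 = price(q1)                       # price read off the demand curve at Q1

    print(f"Q1 = {q1}, P1 = {p1}")       # 9 units at a price of 82
    print(f"revenue = {p1 * q1}")        # green + tan areas: 738
    print(f"cost    = {total_cost(q1)}") # tan area: 270.5
    print(f"profit  = {profit(q1)}")     # green area: 467.5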

Analogous to what was discussed above, the blue triangle is the consumer surplus, representing the money saved by those willing to pay more than P1.  The gray triangle is a new feature of the monopoly’s presence in the market.  It is called deadweight loss, and it represents the lost efficiency that results because the monopoly produces below the equilibrium value (Q3).  Paraphrasing Taylor, this loss of social surplus occurs because the monopolist’s profit-maximizing pricing blocks some demanders from making transactions they would be willing to make.
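
Finally, a sketch of the deadweight loss itself.  To keep the gray region a simple triangle, this last example simplifies further and assumes a constant marginal cost; as with the earlier sketches, all the numbers are hypothetical:

    # Illustrative deadweight loss with a constant marginal cost (hypothetical numbers)
    # Demand: P(Q) = 100 - 2*Q, so marginal revenue is MR(Q) = 100 - 4*Q
    MC = 20.0

    # Monopoly: produce where MR = MC, then read the price off the demand curve
    q1 = (100 - MC) / 4        # 20 units
    p1 = 100 - 2 * q1          # price of 60

    # Competitive benchmark: produce where price equals marginal cost
    q3 = (100 - MC) / 2        # 40 units

    # Deadweight loss: the triangle between the demand curve and MC over the
    # units (Q3 - Q1) that the monopoly never produces
    deadweight_loss = 0.5 * (q3 - q1) * (p1 - MC)
    print(f"Q1 = {q1}, P1 = {p1}, Q3 = {q3}")
    print(f"deadweight loss = {deadweight_loss}")   # 400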

So there, in a nutshell, is the real reason why monopolies are to be examined carefully and possibly regulated.  Not because they are evil or world-dominating or corrupt, but because they are inefficient producers.

The next column will close out this discussion of monopolies by looking at how monopolies form and some of the reasons why their production inefficiency might be tolerated.