Uncategorized

Economics of Elections

This month’s column, written in tandem with the article on elections in the sister blog Aristotle2Digital, is about how voters choose to spend their vote and how politicians choose to sell their services – in short, the economics of elections.  The companion piece, entitled Election Conundrums, explored a particular case study, lifted from American politics, that illustrated Arrow’s Impossibility Theorem, which states that it is impossible to find a voting system that guarantees a ‘fair’ outcome when three or more candidates are present.  This conundrum is the result of the mathematical ambiguities found in statistically summarizing the results of an election in which voters split their vote among those running for office.  Being mathematical, that analysis did not try to determine how the voter population became split – it simply took the split as given and determined the implications.

This column remedies that deficiency by exploring some of the behavioral economics that lead to unusual electoral outcomes.  The ideas discussed here are strongly influenced by the excellent video Why Government Fails by Antony Davies of Duquesne University.

Davies opens by dividing the US population into three categories: voters, politicians, and bureaucrats.  He then goes on to speak about the myths that most people hold about how government should work, myths based on naïve idealizations.  He proceeds to demolish these myths by simply modeling the behavior of each member of those three strata in the terms a public choice economist would use to describe them: each is a human being subject to limitations (economic scarcity) and the desire to maximize his individual utility.  By taking these attributes into account and by properly identifying the specific type of utility each wishes to maximize, Davies shows that the rational choices each member would make lead to a set of governmental behaviors quite different from what our idealism says should result.

For example, consider the following ‘stupid’ law:  a member of the red population proposes a law in which each member of the green population pays ten dollars to the government, half of the money is incinerated, and the remaining money is then evenly distributed to the red population.

Most of us believe that since the green population outnumbers the red by a ratio of 5 to 1, this law, which is clearly societally bad, would never pass.  But such analysis is naïve because it fails to account for the fact that voting entails a cost.  Most of us are conditioned to be appalled by the thought of a poll tax, but in fact each of us bears such a cost when we vote.  Even if money is not explicitly spent, there are costs associated with taking the time to learn about the issues, to familiarize ourselves with the law, and finally to expend the time and effort to go to the polling place.

If we assign $20 as a monetary proxy for the cost (after all, money is a proxy for time) then it is in the rational best interest of each member of the green population not to bother getting involved but to simply bear the $10 cost, as he will come out ahead.  In a sense, each member of the green population knows how to pick his battles, and this isn’t one of them.  Davies says that the members of the green population are rationally ignorant of this kind of scenario, where the cost is distributed and the benefit is concentrated.  A real-life example of such a perverse situation, where the majority bear a cost that is less annoying than the effort to end it, is found in the tariffs imposed on sugar imports into the US.
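The arithmetic behind this choice is easy to sketch.  The population sizes below are my own illustrative assumption; only the 5-to-1 ratio, the $10 tax, and the $20 voting-cost proxy come from the text:

```python
# Rational-ignorance arithmetic for the 'stupid' law.
greens, reds = 500, 100          # illustrative sizes, keeping the 5:1 ratio
tax_per_green = 10.0             # each green pays $10 under the law
voting_cost = 20.0               # proxy cost of getting informed and voting

revenue = greens * tax_per_green             # $5000 collected
payout_per_red = (revenue / 2) / reds        # half incinerated, rest split: $25

# Net gain from bothering to vote, for each side
green_net = tax_per_green - voting_cost      # avoid a $10 tax at a $20 cost
red_net = payout_per_red - voting_cost       # gain a $25 payout at a $20 cost

print(green_net)   # -10.0: rational greens stay home
print(red_net)     # 5.0: rational reds turn out
```

The asymmetry is the whole story: the concentrated benefit makes voting worthwhile for the reds, while the distributed cost makes opposing the law a losing proposition for each individual green.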

Davies also gives excellent examples of how the concept of representative government can fall flat and how bureaucrats have an incentive to transform their jobs into something that works for them rather than a calling in which they work for someone else.  But the scenario that is most interesting is the one involving politicians.

For simplicity, this analysis will be confined to two candidates, each vying for the largest number of votes between a polarized population made up of red and blue populations with very few members occupying the gray area of the middle ground.


Such a highly polarized voting population is quite familiar in these days of tribal politics, and so one might think that the politicians elected into office are in fact hyperpartisan.  But there is, in fact, a tendency for candidates to rush to the middle, even though most voters sit somewhere else on the political spectrum.

The process could work something like this.  Consider a population fiercely divided on welfare spending.


Candidate A favors a lower amount of governmental welfare support than Candidate B.  As a result, Candidate A is favored only by the population within the box.  Candidate B is considered better than Candidate A by the rest of the population, even though neither the gray nor the blue members find him palatable – he is simply the lesser of two evils.

Seeing that he won’t be able to win the election, Candidate A leapfrogs his opponent and moves toward the center.  Not to be outdone, Candidate B lurches further toward the center, until we end up with the situation pictured below, where Candidates A and B occupy middle ground that very few people actually find acceptable.

This scenario is so common that there is an accepted name for it: the median voter theorem.  These mechanics model well the real-world situations we often see, where candidates pander to their base (red or blue) when seeking the nomination but then head promptly to the center for the general election.
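This leapfrogging can be captured in a toy simulation.  The electorate, starting positions, and greedy one-step rule below are all illustrative assumptions, not anything from Davies’s talk:

```python
# Toy best-response sketch of the median voter theorem: two candidates on a
# 0-100 spectrum, a polarized electorate, and greedy one-step moves.
voters = [10] * 45 + [90] * 45 + [50] * 10   # polarized bases, thin middle

def vote_share(x, y):
    """Fraction of voters strictly closer to x than to y; ties split evenly."""
    wins = sum(abs(v - x) < abs(v - y) for v in voters)
    ties = sum(abs(v - x) == abs(v - y) for v in voters)
    return (wins + ties / 2) / len(voters)

def best_step(x, y, step=1):
    # Stay put, or move one step left or right, whichever wins more votes
    return max((x, x - step, x + step), key=lambda c: vote_share(c, y))

a, b = 10, 90                     # candidates start on their bases
for _ in range(100):
    a = best_step(a, b)
    b = best_step(b, a)

print(a, b)   # 50 50: both converge on the median
```

Each candidate improves his share by inching toward the middle, and the only stable outcome is both sitting at the median voter’s position – exactly the rush to the center described above.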

With these sorts of economic forces dictating voter preferences and politician responses on a number of issues, it is quite easy to see just why Arrow’s Impossibility Theorem comes into play as often as it does.

Financial Arbitrage Redux

The previous blog introduced the notion of financial arbitrage and briefly explored the Capital Asset Pricing Model (CAPM) and the Arbitrage Pricing Theory (APT) models for pricing an asset (e.g. a stock).  The CAPM correlates a particular asset with some macroeconomic factor (e.g. inflation or one of the indices) to determine the expected return on the asset.  The APT generalizes this 1-dimensional correlation to the case where multiple factors affect the asset price.  The applicable formula that covers both cases is

RA = Rfree + β1 ( P1 - Rfree ) + β2 ( P2 - Rfree ) + ... = Rfree + β1 RP1 + β2 RP2 + ...

where:

  • RA is the expected rate of return of the asset in question,
  • Rfree is the rate of return if the asset had no dependence on the identified macroeconomic factors (free rate of return),
  • βi is the sensitivity of the asset with respect to the ith macroeconomic factor, and
  • Pi is the additional risk premium associated with the ith macroeconomic factor with RPi = Pi - Rfree being the actual risk premium.

Obviously, setting all the βi beyond β1 to zero in the APT recovers the CAPM.

To use either of these models, the arbitrageur needs to set multiple free parameters (Rfree, βi, Pi) using his judgement based on historical data, and some aspects of this procedure will be the focus of this post.

For simplicity, we’ll limit the analysis to correlating one stock with one index, and we’ll follow the excellent article entitled CAPM Beta - Definition, Formula, Calculate CAPM Beta in Excel by Dheeraj Vaidya for WallStreetMojo.  I’ll be adding only a few points here and there just to round out what Vaidya presented, but it is worth emphasizing what a fine job he did in his presentation.

The correlation we’ll be exploring is between a company called MakeMyTrip (MMYT ticker symbol) and the NASDAQ Composite (^IXIC ticker symbol).  To match Vaidya’s analysis, we confine our time frame to January 1st, 2012 through October 30th, 2014.  Yahoo Finance serves quotes under the historical data link that presents itself after entering a ticker symbol (see the green ellipse in the figure below).

Selecting the time span and downloading the data in CSV format are easy.  I read the data for MMYT and ^IXIC into pandas DataFrames, but since the average price of the NASDAQ Composite over that time span was $3563.91, compared to an average of $18.98 for MakeMyTrip, plotting the two time series on a common plot won’t work, even with log scaling.  Instead, taking a page from Z-scoring in statistics, I made a plot of the normalized price for each listing, in which the instantaneous price was divided by the average.
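The normalization step is a one-liner in pandas.  The sketch below uses made-up numbers standing in for the downloaded CSVs (the real Yahoo files carry Date and Adj Close columns):

```python
import pandas as pd

# Toy stand-ins for the MMYT and ^IXIC CSV downloads; the prices are invented
mmyt = pd.DataFrame({'Date': pd.date_range('2012-01-02', periods=5),
                     'Adj Close': [21.0, 20.5, 19.8, 18.9, 18.2]})
ixic = pd.DataFrame({'Date': pd.date_range('2012-01-02', periods=5),
                     'Adj Close': [2650.0, 2680.0, 2700.0, 2710.0, 2745.0]})

# Divide each series by its own average so both fluctuate around 1.0,
# making a ~$19 stock and a ~$3500 index comparable on one plot
for df in (mmyt, ixic):
    df['Normalized'] = df['Adj Close'] / df['Adj Close'].mean()

print(ixic['Normalized'].round(3).tolist())
```

By construction each normalized series averages to exactly 1, which is what lets the two listings share a single set of axes.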

There is no obvious correlation between the two time series.  The NASDAQ Composite, more or less, rose steadily during this time span, while MakeMyTrip shows more of a parabolic behavior, with a downward trend during roughly the first third of the time span, a minimum in the second third, and rapid, often volatile, growth in the final third.

These differences in the qualitative evolution of the two assets present themselves even more strongly in a scatter plot of the adjusted closing price of each asset.

Nonetheless, there is a reasonably good correlation between the two assets in terms of their fractional gain, defined as the difference in price between two successive days relative to the price of the earlier of the two days (i.e. (pi+1 – pi)/pi, where pi is the price of the asset on the ith day).

There is a definite, if noisy, positive correlation between the daily fractional gains of the NASDAQ Composite and MakeMyTrip.  A linear regression, computed using numpy’s polyfit routine (order 1), confirmed the value of 0.9858 that Vaidya reported for the slope of the regression line.  This value is then the β between MakeMyTrip and the NASDAQ Composite for this time span.
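A minimal sketch of that computation, using numpy’s polyfit on toy price series (the numbers below are made up; with the real MMYT and ^IXIC data the slope would be the reported 0.9858):

```python
import numpy as np
import pandas as pd

# Invented daily closes standing in for the real index and stock series
index_prices = pd.Series([3500.0, 3520.0, 3510.0, 3555.0, 3560.0, 3590.0])
stock_prices = pd.Series([18.0, 18.2, 18.1, 18.5, 18.4, 18.8])

# pct_change computes (p_{i+1} - p_i) / p_i, the fractional gain
index_gain = index_prices.pct_change().dropna()
stock_gain = stock_prices.pct_change().dropna()

# Order-1 polynomial fit: the slope is beta, the intercept is alpha
beta, alpha = np.polyfit(index_gain, stock_gain, 1)
print(beta > 0)   # the toy series co-move, so beta comes out positive
```

Note that the fit is done on the gains, not the raw prices; regressing the prices directly would mostly measure the two assets’ unrelated long-term trends.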

But the fun doesn’t stop there.  We can use the power of the pandas package to extend Vaidya’s presentation by randomly sampling the data to get an idea of the spread in the value of β based on using different samples due to differences in time span or reporting interval.  Running a Monte Carlo with 350 samples each (almost exactly half of the total number of available data points) for N = 10,000 trials gives the following statistics for β:

  • the mean was 0.9835
  • the standard deviation was 0.1443
  • the distribution of β values is normal
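The resampling itself is a few lines of pandas.  The sketch below substitutes synthetic daily gains with a true β of 1 for the real data and scales the trial count down to keep the run short; only the 350-point sample size mirrors the text:

```python
import numpy as np
import pandas as pd

# Synthetic paired daily gains standing in for the real MMYT/^IXIC data
rng = np.random.default_rng(0)
n_days = 700
index_gain = rng.normal(0.0005, 0.01, n_days)
stock_gain = index_gain + rng.normal(0.0, 0.01, n_days)   # true beta of 1

gains = pd.DataFrame({'index': index_gain, 'stock': stock_gain})

# Re-fit beta on many random 350-row subsamples to see its spread
betas = []
for _ in range(1000):
    sample = gains.sample(n=350)
    slope, _ = np.polyfit(sample['index'], sample['stock'], 1)
    betas.append(slope)

betas = np.array(betas)
print(betas.mean(), betas.std())
```

The distribution of subsample slopes clusters around the true β, and its standard deviation is the spread the text summarizes with the mean and standard deviation quoted above.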

Using the standard techniques of statistical analysis, we might be inclined to report the beta value as β = 0.9835 ± 0.0014 or, said equivalently, β could lie in the range of 0.9807 to 0.9863 with the usual 95% confidence.  This uncertainty in the value of β translates directly into a corresponding uncertainty in the assessment of the asset’s rate of return.  A 5% uncertainty is likely to be a good rule of thumb for the arbitrageur in estimating whether he wants to look further at an asset.

Another source of error that the arbitrageur must wrestle with is the value of Rfree, the risk-free rate of return.  According to Investopedia.com, while a true risk-free rate of return is only theoretically realizable, the 3-month Treasury bill is taken as a good proxy.  However, even this ‘sure-fire’ investment vehicle sees movement on the secondary market.  The Wall Street Journal has excellent data and plots which show that, at least in recent months, the daily movement of Rfree can be 5-10%.

The final ingredient in the CAPM model is RP, the additional risk premium associated with the asset.  The way this value is set is probably as much an art as a data science question, since it not only has to account for the actual financial strengths and weaknesses of the asset but also for market sentiment.  If the example in last month’s blog is indicative, values ranging from 2-10% are reasonable.  The uncertainty in the estimation of those risk premiums is probably correspondingly larger, maybe in the 20-30% range.

All told, the estimated value for the real rate of return on an asset must account for all of these error sources.  To illustrate this, let’s continue with the comparison of MakeMyTrip with the NASDAQ Composite by assuming the following:

  • β = 0.9835 with a 1-standard deviation uncertainty of 0.0014
  • Rfree = 0.5% with a 1-standard deviation uncertainty of 0.025% (5% of the 0.5% value)
  • RP = 2.5% with a 1-standard deviation uncertainty of 0.5 % (20% of the 2.5% value)

With these assumptions, the CAPM rate of return would be RA = 0.5% + 0.9835*2.5% = 2.9588%.  The corresponding error in that estimate, obtained using the usual propagation of error techniques, takes the value of 0.4943%.  This value means that the arbitrageur needs to figure in about 0.5% of slop 68% of the time he undertakes this transaction.
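The propagation-of-error step can be checked in a few lines.  The sketch below carries the assumed values from the bullets (all in percent) and sums the squared partial-derivative contributions; it lands within rounding of the ~0.49% figure quoted above:

```python
import math

# Assumed CAPM inputs and their 1-sigma uncertainties, in percent
beta, sigma_beta = 0.9835, 0.0014
r_free, sigma_rfree = 0.5, 0.025
rp, sigma_rp = 2.5, 0.5

# R_A = R_free + beta * RP
r_a = r_free + beta * rp

# First-order propagation: add the squared contribution of each error source
sigma_ra = math.sqrt(sigma_rfree ** 2          # dR_A/dR_free = 1
                     + (rp * sigma_beta) ** 2  # dR_A/dbeta  = RP
                     + (beta * sigma_rp) ** 2) # dR_A/dRP    = beta

print(round(r_a, 2), round(sigma_ra, 2))   # 2.96 0.49
```

The RP term dominates: the 20% uncertainty on the risk premium contributes almost all of the final error, which is why the premium-setting ‘art’ matters so much more than the tight Monte Carlo bound on β.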

All this machinery of linear regression, Monte Carlo simulation, and propagation of error explains the rise of algorithmic trading and of the mathematical analysts (so-called ‘quants’) in today’s market.

Financial Arbitrage

Last month’s column introduced the concept of arbitrage, in which an asset is bought and sold near-simultaneously (the duration for which the asset is held can range widely, depending on the market perspective) in two different markets, with the profit derived from the price differential.  Arbitrage functions to equalize price gradients across the market landscape, indirectly communicating information between buyers and sellers, thereby leading to a more efficient economy.  Of course, the parties engaged in arbitrage don’t set out to perform a useful service – they want to get incredibly rich – but seeking profit for themselves produces, essentially as a by-product, a societal good.  Basically, their savviness in producing a profit ensures that they will look for arbitrage opportunities with a diligence and innovativeness that someone simply hired for the job would never match.

The place where this ‘goodness’ is most fully on display is the financial market, where likely billions are made in arbitrage each day and where the erasure of gradients across the economy serves the most people.  It is within this context that this month’s column explores how to price a security or capital instrument so as to maximize profit and minimize risk.

To this end, this analysis will briefly explore two models: the Capital Asset Pricing Model (CAPM) and the Arbitrage Pricing Theory (APT).

Because a security’s price is essentially negotiated between the buyer and the seller at the time of the transaction and is not set by some outside force (e.g. Fred’s or Joe’s market in last month’s banana example), it is distinctly possible for an arbitrage opportunity to fail to net a profit.  In other words, despite the classical analysis to the contrary, arbitrage activities carry risk.  How much an investor should be willing to pay to buy the asset and how much he can reliably sell it for become incredibly important questions.

In some sense, CAPM is a special case of APT and, as a result, both models share similar mechanics and strategies for minimizing risk while maximizing profit.  Let’s deal with the mechanics first.

In a financial arbitrage, the party engaged in the arbitrage (called an arbitrageur) first identifies a mispriced asset.  If the asset is too expensive, he sells it and uses the proceeds to buy another asset.  If the asset is too cheap, he sells something else and uses the proceeds to buy the cheaper security.  In both cases, a sense of relative pricing attaches when deciding which asset goes where.  In an ideal situation, both assets will be mispriced, but it is likely that the arbitrageur has to settle for just one.  The purchased asset is then held for some time until it is relatively overpriced, at which point it provides the working fund for the next transaction.  It is important to understand that the sales the arbitrageur enacts are typically short sales.

The strategy clearly centers on identifying an asset mispriced relative to the market as a whole, but since the asset is held for some time, called the period, the key feature is comparing the rate of return of the asset relative to other assets.  The measure of relative fitness is based on the response of the asset’s price to a host of systemic, macroeconomic risks, such as inflation, unemployment, and so on.  For each of these risk factors, the risk-free rate of return of the asset is modified by a linear correction.  In the abstract, this modification results in the following equation (adapted from Arbitrage Pricing Theory (APT) by Adam Hayes)

RA = Rfree + β1 ( P1 - Rfree ) + β2 ( P2 - Rfree ) + ... = Rfree + β1 RP1 + β2 RP2 + ...

where:

  • RA is the expected rate of return of the asset in question,
  • Rfree is the rate of return if the asset had no dependence on the identified macroeconomic factors (free rate of return),
  • βi is the sensitivity of the asset with respect to the ith macroeconomic factor, and
  • Pi is the additional risk premium associated with the ith macroeconomic factor with RPi = Pi - Rfree being the actual risk premium.

As in most things, it is much easier to understand this model with a concrete example (derived from Hayes’s article).  Consider an asset that depends on the following four macroeconomic factors (i.e. i = 1, ..., 4):

  • Gross domestic product (GDP) growth
  • Inflation rate
  • Gold prices
  • and the return on the Standard and Poor’s 500 index

Historic data are typically analyzed, according to the available literature, via a linear regression.  This process not only identifies the preceding four factors as the most important, it also gives values for the sensitivity factor β and the premium P for each.  Assuming a free rate of return Rfree = 3%, the data conveniently present themselves in the following table:

Macroeconomic Factor | Sensitivity factor β | Additional Premium P | Risk Premium RP = P - Rfree | β·RP
GDP Growth | 0.6 | 7% | 4% | 2.4%
Inflation | 0.8 | 5% | 2% | 1.6%
Gold prices | -0.7 | 8% | 5% | -3.5%
S&P 500 | 1.3 | 12% | 9% | 11.7%

Adding up each value in the last column and then adding the result to Rfree gives a value for the asset of RA = 15.2%.
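The tabulated sum is easy to verify; the snippet below simply recomputes RA from the table’s sensitivities and premiums:

```python
# The APT sum from the table: R_A = R_free + sum of beta_i * RP_i
r_free = 3.0                       # percent, the assumed free rate of return

factors = [                        # (name, sensitivity beta, premium P in %)
    ('GDP growth', 0.6, 7.0),
    ('Inflation', 0.8, 5.0),
    ('Gold prices', -0.7, 8.0),
    ('S&P 500', 1.3, 12.0),
]

# Each term is beta * (P - R_free), i.e. beta times the risk premium RP
r_a = r_free + sum(beta * (p - r_free) for _, beta, p in factors)
print(round(r_a, 1))   # 15.2
```

Note how the negative gold sensitivity subtracts from the total: a factor that moves against the asset hedges some of the other risks.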

The list of commonly used APT macroeconomic factors includes the ones listed above as well as corporate bond spread, shifts in the yield curve, commodity prices, market indices, exchange rates, and a host of others.  Basically, any factor in the economy as a whole that affects all assets should figure in, as there is no way to mitigate these risks by diversification.

In the above example, the β and RP parameters were assumed a priori.  In his article Arbitrage Pricing Theory: It’s Not Just Fancy Math, Elvin Mirzayev walks through how to simultaneously solve for the βs to get what we are really after: the intelligently derived expected return on the asset.  (CFI’s Arbitrage Pricing Theory has a similar example that complements the previous presentation – financial gurus aren’t often clear in their explanations, and having multiple sources helps.)  Once that is obtained, it is compared to the offered rate and, when the two differ sufficiently, the asset is ripe for arbitrage.

The Wikipedia article on APT and Mirzayev’s piece discuss the importance of developing a portfolio of assets against which to compare but these nuances, while important in the day-to-day implementation, don’t blunt the general idea of APT – namely that the value of an asset (as determined by its return) depends on various factors and can only be judged in relation to the market as a whole.

The CAPM differs from APT primarily in its use of a single factor (a single β) to capture the systemic market risk.  This aspect of the CAPM means that it assumes markets are perfectly efficient.  It isn’t as accurate, but it is much easier to use, and this one feature explains its staying power.

One final note: the devil really is in the details for much of this work.  In particular, there doesn’t seem to be a well-known discussion of the numerical stability of these results.  Given that linear regressions (typically multivariate) are used to determine the betas and, consequently, the risk premiums, there seems to be room to determine just how much additional risk is buried within the algorithm.  But that is a blog for another day.

Market Inefficiencies and Arbitrage

Arbitrage: it isn’t an often-heard word when discussing the economy.  In fact, I consulted the indices of 6 economics textbooks, covering micro or macro or both and ranging from mostly qualitative to strongly mathematical, and found not a single entry; but its importance to markets can hardly be overstated.  In order to understand why this is, we need first to think a little about how markets work and the role of information in the marketplace.

A key observation is that markets work most efficiently when they are at a natural equilibrium, and their approach to equilibrium or even the equilibrium they assume can be impeded by insufficient information about the goods and services being sold.

For example, in Chapter 18 of his book Principles of Economics: Economics and the Economy Version 2.0, Timothy Taylor discusses how imperfect information can impede economic participation in each of the markets for goods and services, labor, and finance.  A person seeking to buy a used car is naturally wary about the quality of the car, about which they know very little and the seller knows far more.  An employer looking to hire a new employee is also naturally wary about the quality of the employee, because all that he can discern comes from a résumé and an interview.  (As a side note, this is why the coding interview, in which prospective computer programmers are given real problems to solve, exists as a hiring gate.)  Finally, a person seeking a loan from a bank has to contend with the bank’s inherent skepticism about the soundness of their repayment prospects, even if the person has an impeccable character where borrowing money is concerned.

These reluctances serve to slow down economic participation, push the equilibrium away from where it would sit in a market with perfect knowledge, and can lead to unintuitive situations where raising prices actually raises demand rather than the other way around (that, however, is a post for another day).  Collectively, economists term all these ‘non-ideal’ market behaviors inefficiencies.

A sad but powerful example of the kind of havoc uncertainties can wreak is summarized in Jamie Goldberg’s article Downtown Portland businesses, derailed by pandemic, say protests present a new challenge.  In the article, Goldberg quotes Andrew Hoan, president and CEO of Portland Business Alliance, as saying of downtown Portland:

It’s unique, it’s boutique, it has the best of all kinds of experiences for customers and for employees and for employers, and it’s devoid of that now because of the uncertainty.

Markets have developed lots of different ways of dealing with inefficiencies and the risks that follow.  Some of the more well-known ones are guarantees, certifications, and insurance and premiums.  Interest rates on loans are structured to provide the lender some insurance against the default of the loan as seen in the usual formula:

Interest Rate = Risk Premium + Expected rate of inflation + Time value of money

The last two terms collectively account for the simple fact that a dollar spent today provides more utility than a dollar spent tomorrow, because 1) inflation eats away at the purchasing power of money (the ‘Expected rate of inflation’ term) and 2) the enjoyment derived from a good or service is less when one has to wait for it (the ‘Time value of money’ term, representing delayed gratification).  Since both of these effects are known beforehand, they attach to any transaction.  The first term (‘Risk Premium’) represents all of the uncertainty brought on by the lack of knowledge about the transaction (does the good have high quality? is the borrower going to pay it back? and so on).
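As a trivial worked instance of the identity above (the percentage split is my own illustrative assumption):

```python
# The loan-pricing identity from the text, with invented percentages
risk_premium = 2.0          # covers the lender's uncertainty about default
expected_inflation = 3.0    # erosion of purchasing power
time_value = 1.5            # delayed gratification

interest_rate = risk_premium + expected_inflation + time_value
print(interest_rate)   # 6.5
```

The useful reading is in reverse: given a quoted rate and estimates of inflation and time value, the residual is the premium the lender is charging for not knowing enough about the borrower.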

The mechanism of arbitrage is another powerful way for the markets to deal with some of these inefficiencies by making it profitable for traders to equalize information between all parties.  It just isn’t as broadly familiar.

In a nutshell, arbitrage is the purchase and subsequent sale of some good (typically called an asset) in order to profit from a positive difference between the final market’s price and the asset’s price in the original market.

In theory, the exercise of arbitrage offers zero risk because the resell is instantaneous and the receiving market can accommodate the amount being resold.  In reality, nothing is truly risk free, and a number of complications can arise that blunt the attractiveness of arbitrage.

For example, suppose that bananas sold for $1.00/pound at Joe’s Market but $1.40/pound at Fred’s Market elsewhere in town.  Then a person can possibly make money by purchasing a supply of bananas at Joe’s and transporting them to Fred’s market.  In this fashion, arbitrage eliminates or, at least, helps to lessen imbalances in the economy caused by a lack of information (since if shoppers knew they could get bananas cheaper at Joe’s than at Fred’s they would, all other things being equal, shop for bananas at Joe’s).  Arbitrage also facilitates a better match between supply and demand, again smoothing out imbalances caused by lack of information and other factors.  However, it is important to realize that arbitrage is distinct from distribution by a middleman, even if they share some aspects.

Many real-world factors make this typical introductory example more complicated than it might seem at first glance.  The primary complication is that the time needed to purchase, transport, and subsequently resell the goods must make this form of arbitrage worthwhile.  The profit earned on the resale must be great enough to outweigh the transportation costs, regulatory fees, and opportunity costs in order for people to engage in it.  These barriers are why we don’t typically see parties engaged in retail arbitrage.
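A quick back-of-the-envelope check makes the point; every cost below is invented for illustration, extending the $1.00 versus $1.40 banana example:

```python
# Does the banana arbitrage survive real-world costs? (illustrative numbers)
pounds = 200
buy_price, sell_price = 1.00, 1.40   # $/pound at the cheap and dear markets

transport_cost = 30.0                # gas and truck rental
regulatory_fees = 10.0               # permits, resale licensing
opportunity_cost = 25.0              # value of the time spent hauling bananas

gross_profit = pounds * (sell_price - buy_price)
net_profit = gross_profit - transport_cost - regulatory_fees - opportunity_cost

print(round(gross_profit, 2), round(net_profit, 2))   # 80.0 15.0
```

A $0.40/pound price gap looks generous, yet at this scale the frictions eat most of it – which is exactly why casual retail arbitrage is rarer than the raw price differences suggest.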

That said, as the internet has made the flow of information vastly easier, it is now possible to find people talking about their retail arbitrage efforts moving product from brick-and-mortar shops for resale on Amazon and eBay.

Of course, retail arbitrage is still a rare thing, not only because of the resale risk but mostly because there are more efficient ways for most of us to make money without the ‘hustle’.  Far more common and more important is the use of arbitrage in a macroeconomic setting, where it is used to smooth out inefficiencies in the financial markets.

In the coming months, this column will explore some of the aspects of arbitrage in the macroeconomic setting, how arbitrage activities tend to cause prices in different markets to converge, and what may happen when arbitrage opportunities are frustrated.

The Business of Comics

Economists long for those perfect case studies that work so well in illustrating an abstract point of economic theory by presenting a real-world example full of human drama and marketplace interactions.  And, while it is still too early to tell whether the latest news from the comic book industry will become a featured story in econ textbooks years from now, it certainly has all the fixings.

The news centers on an interesting development that just reared its ugly head in that microcosm of the entertainment world that produces the majority of the world’s comic books (or sequential art for the more refined): after a 25-year exclusive relationship, DC Comics has decided to part ways with Diamond Comic Distributors.

To understand the ramifications involved, a history of how comic books are sent to the market is in order.

For much of their history in the United States, comic books were considered just another periodical, and the primary method by which they were sold to the end consumer was the newsstand.  The newsstand owner negotiated quantities with the publisher and distributor months in advance, displayed the books for a time, and split the revenue with the partners.  Since the units were provided on consignment, any unsold copies were then returned to the publisher (usually, to save on shipping, only the covers were returned, with the proviso that the newsstand destroy the interiors).

Marvel Comics began to explore the concept of a direct market for comics in the late 1970s under the direction of Jim Shooter.  The primary innovation was that the comic books were sold to a store well below cover price and the store would subsequently sell the product at cover price, netting, as revenue, the price differential.  Since the store owned the product outright, any unsold units remained in its possession.  The genesis of the direct market is marked in late 1980, with Dazzler #1 being the first regular monthly comic sold exclusively in the direct market.

The store generated a greater profit per unit than at the newsstand, but it also ran the risk of buying a product no one wanted that languished on the shelf.  Comics publishers also helped to mitigate the risk by publishing stories with large crossover events that linked poorer-selling titles to better-selling ones and by writing stories that spawned interest in back-issue purchases.  The consumer enjoyed the advantage of being able to sample the book before purchase, but at the cost of paying the markup.

This approach put a premium on distribution.  After some settling-in time, three companies emerged that handled the majority of comic book distribution:  Capital City, Diamond, and Heroes World.  Riding high on the comics boom of the early 90s, Marvel Comics bought Heroes World Distribution in 1995 to exclusively distribute their product.  Diamond responded with exclusive deals with Dark Horse, Image, Archie Comics, and Marvel’s main rival DC Comics.  Left out of the main action, Capital City soon after sold out to Diamond.  An overextended Marvel then filed for bankruptcy protection in 1996, effectively ending Heroes World Distribution in 1997 and leaving Diamond as the only game in town.  Comic Tropes’s video How Distribution has Saved and is Now Killing Comics gives a comprehensive summary of this history, including some details about the anti-trust litigation not discussed here.

It’s hard to tell what impact this monopoly has had on comics over the years.  Sales figures of US comics to the direct market show growth from about $15 M to $27 M in the twenty years since (taken from Comichron’s sales summaries), which amounts to about 15% growth after inflation.  Spending on general entertainment grew closer to 30% over a similar timespan, but comics have had to contend with the emergence of a host of substitutes, including manga, anime, and videogames.  Comic book publishers resorted to an ever-increasing array of gimmicks to try to lure more customers in (rebooting a series or a whole universe, tie-ins with movies, changing the lead character, etc.), but often stores were stuck with hundreds of back-issues with no way to unload them.  Susana Polo’s video, The BEST WAY To Buy Comics!, presents some of the ways in which the direct market distorts comic book creativity.

This delicate balance continued for over 20 years until recently, when DC Comics broke from the fold, and the interesting question is, why?

According to Peter David, a long-time comic book creator who got his start in sales at Marvel, this move is DC’s way of declaring war on its long-time rival.

One can certainly understand David’s interpretation given that DC makes up about 30% of the total US comics market share ($8-9 M versus $11-12 M for Marvel and $28-32 M overall in 2017), but before concluding he’s correct, let’s explore the economics of the situation a bit more deeply.

First, the basic facts about the new arrangement.  DC is going with three distributors instead of one:  UCS Comic Distributors will handle the east coast, Lunar Distribution will handle stores in the west, and Penguin Books will provide graphic novels and collections (not monthly comics) to US bookstores.  In the Bleeding Cool article Stagnant DC Sales, Diamond Plans and What Happens Next – The Gossip, author Rich Johnston points out that, while Diamond has a transportation hub (Diamond UK) in England that enables it to service the European market with US comics in a cost-effective way, neither UCS nor Lunar does.  One of the UK stores mentioned in the piece says that it now has to purchase comics at above cover price due to the high cost of shipping, and Johnston also notes that, even though the UK makes up 10-15% of Diamond’s overall distribution, DC owns the lion’s share of that slice.

Second, some feedback from DC is in order.   In his article DC Comics Admits Comics Have “Sustained Stagnant Growth” In Decision To Cut Ties With Diamond Comics Distributors in Bounding Into Comics, John F. Trent covers the email that AT&T-owned DC Comics sent to retailers on June 5, 2020, in which they admitted that “sustained stagnant market growth” figured into their decision to part with Diamond Comics Distributors.  The article cites an interview with Mark Gallo, the owner of Past Present Future Comics.  Gallo said that

[t]his new entity won’t have any incentive to provide terms in my opinion. I personally have 28 day terms with Diamond and I’m assuming I’ll be cash on delivery with this new distributor. This looks really bleak.

Gallo also added that they're

[b]laming a distributor instead of their unsellable woke trash product. Total deflection. Comics fans want good stories and art not a laundry list of woke writers interjecting their politics into character development and storylines.

Johnson believes the gossip on the street supports Gallo’s view that the product itself bears blame.   Johnson notes that Pamela Lifford, President of Warner Bros. Consumer Products, has no love for DC Comics, that she views them as costing too much to make (production, labor, and time), and that she would rather have DC focus on its graphic novel line and bookstore market.

We are now in position to better understand the possible reasons DC split from Diamond and to judge Peter David’s assertion that this is a declaration of war by DC on Marvel.

  • Suppose, for the sake of argument, that DC does drive Diamond out of business. Wouldn’t either UCS or Lunar decide to take on the displaced companies (Marvel, Image, Dark Horse, IDW, Boom, and so on)?  They would be fools not to grow their business regardless of how disgruntled DC might be.  Also, Marvel Comics might logically be viewed as the development arm of the movie juggernaut that is the MCU, so it is unlikely that it would be allowed to go under.  We conclude that it is highly unlikely that this move represents war on Marvel.
  • Perhaps then it is a war on Diamond. This interpretation also seems thin, as DC is directly cutting off its nose to spite its face with regard to its UK market.  If DC’s comics increase in price overseas, there will be downward pressure on demand, and Marvel and the others may fill the void.  Also, unless Lunar and UCS offer even lower prices, there is a distinct possibility that shops will cut their orders of DC as well.  Given that the company has not managed to grow its comics revenue, and its attempts at a movie universe to compete with the MCU have been far from successful, it is distinctly possible that Diamond will be able to recover the lost DC revenue as other publishers increase their market share.
  • So, it seems that the only likely conclusion is the one hinted at in Johnson’s article: DC executives want to lower costs in the unit and they don’t care if market share goes with it.  Since sales are stagnant, the only ways to increase profitability are to either cut product or cut operating costs.  In going with its new distributors DC must be achieving major savings in order to justify the risks involved.  If the line of DC Comics fails, they can always say they tried, and then they can live for years off of the graphic novels and collected stories while they rebuild a better and cheaper workforce.

No matter what interpretation one finds fits the facts best, there is no doubt it will be an interesting ride.

If Eastman Were Alive Now

Sometime back I wrote an article arguing how Kodak lost focus on its vision and went from one of the most significant companies in the world to a small provider of a niche product (How Kodak Went So Wrong – January 23, 2015).   The basic premise of that argument is that if Kodak had kept true to George Eastman’s original motivation for producing photographic film – namely that he wanted to enable people to ‘capture memories’ as easily as possible and film was the means to that end during his lifetime – then Kodak would have transitioned to digital and remained a world business leader.  The movement from film to CCD-based digital ‘memory capturing’ in the 1980s would have been as logical a progression as the transition from plate-photography to film had been in the 1880s.

From a purely economic perspective, the current COVID-19 pandemic is providing valuable insight into how companies are positioned to either rise or fall based on their business savvy and agility in adapting to these dynamic and unpredictable events as they unfold.  Economists and business analysts will be able to write papers for decades to come examining each and every sector of the economy.

That said, this post will engage in a little alternative history and counter-factual conjecture by asking what the modern landscape would look like if Kodak had been able to hold on to the business sense and entrepreneurial spirit that George Eastman had in such abundance.  To do this we are going to make an admittedly radical assumption in imagining that Kodak’s chemistry wing managed to produce a fountain-of-youth serum that only worked on its founder.

(Image: public domain, published by B. C. Forbes Publishing Company, New York, 1917 – https://archive.org/details/menwhoaremakinga00forb, via https://commons.wikimedia.org/w/index.php?curid=19172293)

Our alternative-universe story begins in 1929 when George Eastman was 75 years old.  While tinkering on improved roll film, Kodak’s chemical division accidentally creates a noxious compound whose merest whiff causes violent headaches, nausea, and vomiting.  Trying to isolate what happened, the lead chemists realize that they can’t quite duplicate the experiment but that what they’ve created is a highly volatile and potent poison.

Learning about the lab accident, our alternative Eastman, who is already beginning to suffer from the spine ailment that would drive the real Eastman to suicide in 1932, decides that, if he is to take his own life, he would prefer to do so with this one-of-a-kind toxin that his company has produced.  Sneaking into the lab late at night, Eastman quaffs the poison and collapses, thinking, as he loses consciousness, that the end is nigh.

Imagine his surprise when he awakens hours later, his spinal pain completely gone and his age regressed until he looks and feels as he did in his mid-thirties, around the age when he had developed the original Kodak camera.  Restored to his prime with vigor to spare, he resumes his role steering one of the largest companies of his time.

Now filled with inexhaustible youthful vigor, he tackles the new technology of that era: quantum mechanics.  His long-standing interest in chemistry is supplanted by this new science that underlies it.  It’s a slow go but just as he is starting to master the subject conceptually, World War II breaks out.

Under his leadership, Kodak supplies aerial photographic support to the Allied intelligence apparatus.  After the war, Eastman has the company build upon the technical innovations it produced during the war and the goodwill that came with serving the country during its time of need, further positioning Kodak as a go-to company that makes life better.

Now believing that an even better experience awaits his consumer base, Eastman has Kodak develop a research branch focusing on the science of optics and electronics.  He backs a partnership with Bell Labs and directs his technical staff to stay abreast of developments in the field.

The critical juncture takes place about 25 years after the end of the war.  As in our own timeline, the late 1960s find Boyle and Smith making the first charge coupled device at Bell Labs, followed shortly after by Steven Sasson, a Kodak employee, developing and patenting the first CCD-based camera in 1975.  However, unlike our own timeline, with Eastman at the helm, Kodak quickly jumps on commercialization and begins to gather market share with its digital photography.

In 1984, the alternative-timeline Kodak eagerly agrees to supply the official film of the Los Angeles Olympics.  This move allows Kodak to keep rival Fujifilm at bay while also enabling the corporate giant to again use favorable public sentiment to its advantage in promoting its new digital photography offerings.

The time of crisis now passed, Kodak then steamrolls into the modern era.  Eastman’s vision of putting ‘the what’ (capturing memories) before ‘the how’ (photographic film) allows Kodak to nimbly respond to the ever more rapidly changing market.

By the late 1980s, Kodak has partnered its CCD-based technology with Sony to make a consumer camcorder second to none.  By the 1990s, recognizing how the internet would allow a person to share the memories he had captured with Kodak cameras, Eastman guides the company to invest heavily in the internet.  Kodak develops, patents, and licenses streaming technology years ahead of what was developed in our own timeline.  By the mid 2000s, Kodak, now recognizing the move towards miniaturization and consolidation desired in the consumer telecom industry, beats Apple to the invention of the smartphone.  Finally, capitalizing on the growth of broadband internet and increasing speeds, this alternative Kodak corners the market on teleconferencing and collaborative applications like Zoom, Webex, or Adobe Connect.

When the COVID-19 crisis hits this alternative timeline, Kodak, already a household word, is able to further cement its reputation in the eyes of the consumer as the company that helps make, capture, and share memories with each other while staying safe.

While it is true that the foregoing is a work of hypothetical fiction with no way to either prove or disprove it, it is also certainly true that at least some of the events narrated would have actually been within Kodak’s grasp had the company simply kept true to the vision of George Eastman.

The Good, The Bad, and the Corona

Well, it is entirely obvious by now that life in the USA has changed due to the coronavirus’s clutch on the world as a whole.  In these seemingly desperate times, as in similar crises, there is always a bit of good mixed in with the bad, and some other things worth commenting on as well.  Let’s start with the good.

The scene is 5:30 am on a Tuesday morning.  I usually get up this early but ordinarily I stumble into my home office and look for research ideas or inspiration for a new blog.  This day I did nothing of the kind.  Shuffling off to the bathroom, I ran a comb through my hair, freshened my face, and changed from pajamas to street clothes.  Slipping out of the bathroom I went downstairs, fetched my fob, and at 5:45 am left my house.  My destination, the local supermarket, lay some ten minutes away.  When I arrived I queued up behind the dozen or so people there before me, each keeping a 6-foot buffer between himself and his neighbors to the front and rear.  A little after 6 am, the store opened and we all somberly entered in single file.  Most, if not all, of us went straight down the paper products aisle looking for that one commodity that is to our modern situation what gasoline was to Mad Max - toilet paper.  It was eerie and surreal to walk through an area of the store that until 2 months ago held an abundance of products to find just under a hundred packs of rolls that were mostly scooped up by me and my fellow early-morning shoppers.

There are many good aspects of this sorry situation but I’ll only comment on three.

The first is that, despite the stay-at-home orders and the general shuttering of the economy, the American can-do spirit has not entirely withered.  There are still manufacturing activities going on in the country.  The supply chains may be clogged but they are not stopped, and we still enjoy a standard of living that was entirely inconceivable a century ago.

The second is contextual and may not come home to everyone, even though it should.  What we are experiencing with these various shortages is a small foretaste of what socialism would be like if we embraced it.  Long lines, empty shelves, and desperation are always the earmarks of socialism and communism.  No country on Earth, even the so-called socialist Scandinavian nations, can have a vibrant economy under socialism.  Denmark and Sweden (and probably the others in the fever dreams of politicians who believe in a Nordic utopia) have clearly rejected the label of socialism and pointed to their free-market practices.  And well they should, because free-market practices are what fill shelves with toilet paper, sugar, napkins, ground beef and so on.  And, touching on my first point above, we can see experientially just what happens when the market is not free and, hopefully, this will be the worst we’ll ever see.

The third is far more prosaic, dealing with the rule of substitution.  Economists like to point out that when supply is low, demand is high, and prices rise, consumers will substitute similar alternatives for the good they usually purchase.  For example, people might switch to ground turkey if beef prices sharply increase.  I think economists should have a field day with papers galore based on what I have observed.  Everywhere I went in the supermarket, there were shelves stripped bare next to shelves brimming with products very few wanted.  I know that I have tried new items that I ordinarily wouldn’t have purchased, but it seemed that even in crisis, choosy mothers were finicky about what foods they were allowing in.  It would be fascinating to see a breakdown of what threatened people still wouldn’t touch and whether the buyers of the various chains change how they purchase based on these observations.

On the bad front, I’ll focus only on one thing, but a really bad one.  The nation’s governors, mayors, and elites seem, in far too many instances, to have let power go to their heads.  The textbook example is probably found in Michigan, where the following table compares the do’s and don’ts, courtesy of Governor Gretchen Whitmer.

Do | Don’t
Purchase liquor, lottery tickets, and marijuana | Purchase seed, paint, and rugs
Go boating with a canoe, rowboat, or kayak | Go boating with a power boat or jet ski
Get an abortion | Get a biopsy or joint replacement

Louisville, KY Mayor Greg Fischer comes in a close second: he ordered churches to cease ‘drive-in’ services, where each car was at least 6 feet from its neighbors, but wouldn’t ban drive-through food pickup, where the distances between strangers were much closer and the number of direct interactions much higher.  I challenge anyone to find the logical rhyme-and-reason of these allowances and prohibitions.  The table listings smack of lobbyist influence and crony-capitalism.  Milton Friedman certainly seems vindicated in his belief that big government exists to grant favors.  In addition, all sense of cost-benefit analysis and awareness of hidden costs seems to have gone out the window in shuttering the national economy.

Sure, COVID-19 seemed like the super-flu ‘prophesied’ in Stephen King’s The Stand back at the beginning of March, but now the emerging evidence seems to indicate that the communicability of the disease is much higher and the lethality a lot lower than first thought.  Still, cries persist that even one life lost is too many.  What utter nonsense.  Below is a table adapted and supplemented from CDC data indicating how people died in 2017.

Cause of Death | Number of Deaths
Heart Disease | 647,457
Cancer | 599,108
Accidents | 169,936 (including 37,133 traffic deaths)
Chronic Lower Respiratory Diseases | 160,201
Stroke | 146,383
Diabetes | 83,564
Influenza and Pneumonia | 55,672
Nephritis, Nephrotic Syndrome, and Nephrosis | 50,633
Suicide | 47,173
COVID-19 (as of 4/24/20) | 44,973

 

I get that social distancing impeded the immediate spread (although the Chinese Communists could have nipped it in the bud if they hadn’t lied), but let’s get people back to work.  We don’t shutter the economy because over 600,000 people die of heart disease each year, no doubt aggravated by working in close proximity to other people.  The unseen cost of keeping the economy moribund will cause more addictions and more suicides for years to come.

I’m not the only one advocating for a measured approach to the risk imposed by COVID-19.  Heather Mac Donald, in her article The Deadly Costs of Extended Shutdown Orders, argues quite convincingly that focusing on saving “just one life” effectively does more harm than good and that our governing elite are using anything but the science of risk analysis to make policy.

I’ll end on an ugly note, since the blog title suggests a more than passing similarity with a famous western.  The behavior of my fellow man can be very ugly, despite certain philosophers claiming that tragedy and crisis bring out the best in people by shaking them from their complacency.  The scarcity of toilet paper could be understandable as a supply-side problem if I hadn’t seen a neighbor three streets away try to scurry into her home in the early hours last week.  With two 20-packs of toilet paper under each arm and another 20-pack in the trunk, one has to wonder whether she eats it or has simply given in to panic and fear and is hoarding.  Let’s just say that my answer to that question doesn’t favor toilet paper as any part of the food pyramid.

 

Economics and Ergodicity

This month, I came across a very interesting article about a proposed resolution to what the author regards as a long-standing problem in economics.  The basic point of the paper, entitled The ergodicity problem in economics by Ole Peters (Nature Physics, Vol. 15, December 2019), is that classical economic analysis is fundamentally flawed.  According to Peters, the fatal mistake made for hundreds of years is the ergodic assumption, which equates the time average of an economic process (say, investing) by an individual to the average of the same process across an entire population at a given time.  Determining whether this assumption holds is extremely important if economists want to be able to model what the average person will do.

Ergodicity is a concept originating in the branch of physics known as statistical mechanics.  Statistical mechanics seeks to characterize physical systems that possess vast numbers of moving parts in terms of a vastly smaller set of parameters.  Evolution of a complex system is generally described in terms of how the averages and standard deviations associated with all these parts change in time.  By assuming that the system is ergodic, the physicist can state how a system will evolve in time simply by looking at the average over multiple copies of the system at an instant in time.

An example will help make some of these ideas more concrete.  A typical ‘simple’ physical system with a vast number of moving parts is a bottle of water.  Describing this bottle of water at the supermarket is absurdly simple: one merely specifies the amount of fluid (250 ml, 500 ml, etc.) and the temperature.  If one wanted to be fancy, one could even specify the percentages of trace elements, bringing the number of parameters, say, up to 100.  Despite the fact that 100 is a relatively large number of things to track, it’s still vastly smaller than the number of parameters needed to describe the bottle at a molecular level.  In a 500 ml bottle, there are approximately 1.86 x 10^25 water molecules or about 9.3 trillion trillion molecules for each dollar of federal debt and each requires, at a minimum, 7 numbers to describe its motion.
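As a sanity check on numbers of this scale, the molecule count can be recomputed from standard constants (Avogadro’s number and the molar mass of water).  The sketch below, which assumes a density of 1 g/ml, lands on the same order of magnitude as the figure quoted above.

```python
# Back-of-envelope estimate of the number of water molecules in a 500 ml bottle.
# Density is approximated as 1 g/ml; the result is on the order of 10^25.
AVOGADRO = 6.022e23        # molecules per mole
MOLAR_MASS_WATER = 18.015  # grams per mole
DENSITY = 1.0              # grams per millilitre (approximate)

volume_ml = 500
moles = volume_ml * DENSITY / MOLAR_MASS_WATER
molecules = moles * AVOGADRO
print(f"{molecules:.2e} molecules")
```

Small differences from the figure in the text come down to the rounding of the constants used, but the point stands: tens of trillions of trillions of molecules versus roughly 100 bulk parameters.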

Once the bottle is bought and brought home, it will have its own local history.  It may be placed in the refrigerator or left in a hot car; it may be opened and partially or totally drained or kept shut for a later consumption; and so on.  Ergodicity assumes that each of the bottle’s observed states, as it evolves in time, can be matched with a single bottle in a large population of differently prepared bottles at a given time.  An unopened 500 ml bottle that warms from 5 to 20 C can be thought of as first visiting the state of an identically-sized bottle that is held at 5 C, then a different 500 ml bottle held at 5.5 C, then yet another bottle of the same size held at 6 C, and so on.   In this way the time average of the single bottle’s temperature can be derived from an average over a population or ensemble of bottles each kept constant at its own particular temperature. Alternatively, the large population’s statistics may be derived by taking a time average of a single member.  Which direction (time-to-ensemble or ensemble-to-time) depends on the physical system and the experiments being performed.

The ergodicity assumption has been quite successful in thermodynamics, but Peters’s contention is that the types of dynamical systems found in an economy do not share this feature with the dynamical systems found in nature.  To support this claim, he offers a simple gambling model that will be explored in the rest of this column.

In Peters’s model, a person can participate in a repeated wager in which, 50% of the time, he increases his wealth by half, and the other 50% of the time he loses 40% of all that he has.  According to Peters, classical economics would predict that the potential gambler would jump at this chance.  The gambler’s enthusiasm derives from his analysis, using classical concepts from economics, of the fact that the expectation value for this gamble (the average gain or loss, denoted by E(gamble)) would be a 5% gain since

E(gamble) = Prob(win) Payoff(win) + Prob(loss) Payoff(loss)

                  = 0.5 (0.5) + 0.5 (-0.4) = 0.05

where Prob(win/loss) is the probability of winning or losing (0.5 for both) and Payoff(win/loss) is the outcome of a win or a loss (0.5 or -0.4, respectively).
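For concreteness, the expectation value can be checked numerically.  This is a minimal sketch using the probabilities and payoffs just defined.

```python
# Expected fractional change in wealth for one round of Peters's wager:
# with probability 0.5 the gambler gains 50%, with probability 0.5 he loses 40%.
prob_win, payoff_win = 0.5, 0.5      # Prob(win), Payoff(win)
prob_loss, payoff_loss = 0.5, -0.4   # Prob(loss), Payoff(loss)

expected_gain = prob_win * payoff_win + prob_loss * payoff_loss
print(round(expected_gain, 3))  # 0.05, i.e. a 5% expected gain per round
```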

Peters points out that no rational person would actually agree to this gamble; thus the disconnect, he argues, between classical economic predictions and observed participation in the economy.  This is where ergodicity comes in.  Basically, the average person understands intuitively that this gamble, despite its constant positive expectation value, is not ergodic.  That is to say, the time average of a gambler’s wealth, assuming he repeatedly plays, doesn’t grow at a steady 5% but instead leads to ruin: each round multiplies his wealth by either 1.5 or 0.6, and since the geometric mean of these factors is sqrt(1.5 x 0.6) ≈ 0.95, the typical player loses about 5% of his wealth per round.

The article presents a rather disturbing graph in which the wager is simulated as a random process for 50 members of the economy who participate in repeated goes at the same gamble.   My own reproduction of this process using 150 members is shown below.

Each of the grey lines represents the time evolution of the relative wealth of a single gambler who repeatedly engages in the Peters wager.  The black line is the average over all the gamblers - the time evolution of the ensemble average.  The red line shows the steady 5%-per-round growth predicted by the expectation value; if ergodicity held, the black line would coincide with the red line.
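A minimal Monte Carlo sketch of this process is given below.  The ensemble size (150 gamblers) follows the reproduction described above; the number of rounds is an illustrative assumption.  Each gambler’s wealth is multiplied by 1.5 (a win) or 0.6 (a loss) every round and, despite the 5% positive expectation value, the typical gambler is ruined.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

N_GAMBLERS = 150  # ensemble size, as in the figure described above
N_ROUNDS = 1000   # number of repeated wagers (illustrative choice)

# Every gambler starts with unit wealth; each round multiplies wealth
# by 1.5 (a 50% gain) or 0.6 (a 40% loss) with equal probability.
wealth = [1.0] * N_GAMBLERS
for _ in range(N_ROUNDS):
    wealth = [w * (1.5 if random.random() < 0.5 else 0.6) for w in wealth]

# Ensemble average (the black line) versus the typical (median) gambler.
ensemble_average = sum(wealth) / N_GAMBLERS
median_wealth = sorted(wealth)[N_GAMBLERS // 2]

# The time-average growth factor per round is sqrt(1.5 * 0.6) ~ 0.95,
# so the median gambler's wealth collapses toward zero.
print(f"ensemble average: {ensemble_average:.3e}")
print(f"median wealth:    {median_wealth:.3e}")
```

Running this reproduces the qualitative behavior of the figure: virtually every trajectory decays toward zero even though the naive expectation value predicts 5% growth per round.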

The bulk of Peters’s article is a sophisticated analysis of why ergodicity fails to hold and under what conditions.  It is a difficult read but likely very important in revising economic theory.  But, regardless of how important the technical details may prove, an even more important point will be understanding how the human participant, lacking all of this specialized expertise, knows to stay away from this type of gamble.  Insights into this last point are likely to be more profound than the underlying mathematics.

Ants, Grasshoppers, and Student Debt

Ordinarily this column stays away from politics with a capital ‘P’. While economics necessarily borders on politics, defined as the basic interaction between people, this column tries hard not to call out individual Politicians or to take a partisan position. That said, the recent interaction between Elizabeth Warren and an unnamed Iowa father is worth discussing in that it brings into sharp focus Frédéric Bastiat’s old point about what is seen and unseen in the economy.

The interaction is shown in the following YouTube clip (first minute is sufficient).

The dialog is short and worth repeating in print here.

Iowa Father: I just wanted to ask one question. My daughter is getting out of school. I’ve saved all my money. She doesn’t have any student loan. Am I going to get my money back?

Warren: Of course not.

Iowa Father: So, you’re going to pay for people who didn’t save any money and those of us who did the right thing get screwed.

Warren: No, you’re not getting screwed.

Iowa Father: Of course we did. My buddy had fun, bought a car, went on vacations. I saved my money. He made more than I did. But I worked a double shift, worked extra - my daughter worked since she was 10.

The interaction got hotter after that, but the main point the Iowa Father made is correct. This observation certainly flies in the face of those who look only at the seen cost of student debt.

There are clear problems with young people holding student debt. In many cases students have been sold a bill of goods that a college education is simply the only way to get ahead and make good money. Plumbers, mechanics, and HVAC technicians all over the country would no doubt laugh at the idea that you need a college education to make money. In addition, graduates across the land are probably lamenting the fact that $50,000 of student debt and a degree have not led to the promised land.

This problematic pairing of a bad degree with burdensome student debt is only exacerbated by the fact that students can’t even get out from under it by declaring bankruptcy, which would otherwise be a reasonable remedy since the students would simply bear a different cost.

But the sympathy that we all may have for the plight of the bad-degree/burdened-by-debt individuals (a seen cost) should not blind us to the unseen cost that the Iowa Father pointed out.

The Iowa Father acted like the proverbial ant in Aesop’s parable of the ant and the grasshopper. In the original parable, the ant saved food during the summer against the lean days of winter. In contrast, the grasshopper frolicked and danced, putting nothing aside for any dark days. When winter descends, one finds the grasshopper begging the ant for food and the ant refusing to let the grasshopper share in the fruit of his labor.

The Iowa Father did the right thing. He scrimped and saved. He forewent fun, new cars, and vacations. He delayed his gratification in order to secure his daughter a college education unencumbered by student debt. His buddy was like the grasshopper. He spent and he idled and his kids now have to live with student debt.

But just like some people want to rework the original parable to point out the lack of compassion of the ant (as if the ant were a villain and not a sober individual who recognized the need for thrift), some people only want to focus on how bad it is to have student debt. Suppose we, as a society, chose to forgive that debt. Any compassion one might be showing to the student and his family is far outweighed by the callousness and lack of compassion we are showing to the Iowa Father.

He is bearing two unseen costs and we are effectively telling him that his work, efforts, and his virtues are all reasons he should be enslaved by the state.

The first unseen cost is the opportunity cost he bore while saving for his daughter’s college education. Think of all the merry times he not only missed but will never be able to experience again. There must have been a trip, perhaps to Disney World, with his teenage daughter that he passed over in order to put money aside. Probably he can afford the trip now, but his daughter is much older and the moment is lost. He was also denied the fond memories of many family outings and the enjoyment of a new car, all of which he sacrificed so that she could graduate without debt.

The second unseen cost is the increase in taxes and cost of living that this fellow must now endure so that the creditors can have their loan returns met while the profligate takers of those loans skate free. This adds insult to injury, making him not only pay for his own sacrifices but also sacrifice for the gains the other family enjoyed.

By ignoring the costs borne by the Iowa Father, what we are effectively saying is that there is a certain class of people in society who deserve to be oppressed and exploited, and that this class is exactly the group that should be rewarded and held up as an example. We’ve turned the world upside down, because forgiving student debt not only victimizes the “ants” amongst us, it incentivizes the “grasshoppers” to be even more prodigal.

In short, the Iowa Father was correct - he is getting screwed.

Black Friday

Well, another Black Friday is upon us.  Once upon a time, when I was younger, I actually viewed Black Friday as a special day to go out and enjoy the hustle and bustle of shopping, to see the Christmas decorations festooning every store, and to buy gifts for the loved ones in my family.  As I got older and became a bit more cranky, the lust and obsession exhibited by certain people began to weigh me down and caused me to think that Black Friday was to be avoided at any cost.  As I’ve gotten even older and more educated about economics, I’ve come back around to liking it, but for different reasons.

When viewed objectively, Black Friday is quite an economic miracle.  Starting some time after Thanksgiving (the exact time seems to change every year), millions of Americans make hundreds of millions, if not billions, of economic choices in just one day.  Stores have to plan and prepare for this bacchanalia of bargain hunting by answering a host of questions.  These include:

  • which items should be stocked,
  • how many of them should be ordered,
  • at what price should they be sold,
  • how much should be spent on advertising,
  • and so on.

What is truly remarkable is that no central planner set this organized insanity up.  No elite intelligence manages all of the variables for each and every institution.  Rather the invisible hand of capitalism operates on a massive scale.  Every step from discovering and processing raw materials, to designing a product that people want, to the manufacture, shipping, distribution, and retailing of the good, is done by an intricate, complex web of self-interested decision making.

It is in this way that Black Friday really is a descendant of those first lessons from Thanksgiving: the abandonment of the ultimately destructive rot of forced, shared work and outcomes, and the embrace of the satisfaction of earning your position by hard work.  Stated simply: the creation of, and participation in, a free (emphasis on free) market.

So, it was with great disappointment that I read the Thanksgiving section (Section 3) of James W. Loewen’s book Lies My Teacher Told Me: Everything Your American History Textbook Got Wrong.

Loewen starts his section with a variety of quotes, obviously intended to set the overall tone about the myth that citizens of the United States entertain about the significance of Thanksgiving.  The three most provocative quotes are:

Considering that virtually none of the standard fare surrounding Thanksgiving contains an ounce of authenticity, historical accuracy, or cross-cultural perception, why is it so apparently ingrained?  Is it necessary to the American psyche to perpetually exploit and debase its victims in order to justify its history?

- Michael Dorris

European explorers and invaders discovered an inhabited land.  Had it been pristine wilderness then, it would possibly be so still, for neither the technology nor the social organization of Europe in the 16th and 17th centuries had the capacity to maintain, of its own resources, outpost colonies thousands of miles from home.

- Francis Jennings

and

The Europeans were able to conquer America not because of their military genius, or their religious motivation, or their ambition, or their greed, they conquered it by waging unpremeditated biological warfare.

- Howard Simpson

Sigh… where to begin with the material fallacies that abound in each of these arguments.  To start with a general observation: each of these quotations addresses points that have nothing to do with Thanksgiving’s roots but rather with what each commentator perceives as a modern corruption.  It is fine to criticize the modern corruption, but in the spirit of charitable argumentation, each of them should have discussed, at least in passing, the original reason for celebrating Thanksgiving.

Now on to the individual quotes.

Michael Dorris’s quote is distinctly sloppy in failing to define ‘standard fare’.  What exactly does he mean?  Perhaps the Macy’s Thanksgiving Day Parade, or the Black Friday advertisements, or even something he saw on TV.  How hard would it have been to say something like ‘… none of the standard fare, which maintains…’?  And in what way does celebrating the core fact that the Pilgrims eschewed socialism have anything to do with ‘exploiting and debasing America’s victims’?

Francis Jennings’s quote misses the point of the proper roots of Thanksgiving.  Yes, on the surface, everything Jennings says is true; Europe could not maintain an outpost colony in the New World.  William Bradford said as much in his writings.  He bemoans the fact that the colony had to stand on its own two feet while trying to live under the ‘socialist requirement’ levied by the Company of Merchant Adventurers of London, who backed the enterprise.  He recognizes the weakness of the arrangement when commenting on the colony’s lack of success in 1622 and early 1623.  Finally, Bradford directed the colony to abandon the communal property arrangement in favor of individual rights and obligations.  Bradford goes on to say the new arrangement ‘had very good success, for it made all hands very industrious’.  His analysis of what went wrong under the communal arrangement was that it was ‘found to breed much confusion and discontent’.  It was the abolishment of this terrible communal arrangement and the success of adopting individual rights that is the real story of Thanksgiving, a story that Jennings’s quote (and all the others) ignores.

Simpson’s quote is the most egregious of the lot.  The Pilgrims were neither militaristic nor were they particularly religiously motivated to conquer a new land (they came to the Americas because they had to escape the religious persecution they faced in Europe).  And, as Bradford’s narrative attests, they originally had no ambition and no greed under the communal arrangement.  All of these points may apply to other colonies at other times, but they are mismatched with Thanksgiving as the subject.  Still, I may have been able to overlook these flaws but for the last sentence.  It defies common sense to believe his last assertion; none of the European settlers would have been happy to bring disease to the New World; it would be counter-productive, since they would have to worry that the disease would turn around and attack them.  This claim is particularly baseless considering the devastation that Europe bore after the Black Plague ravaged the land.  Judging the Pilgrims through a lens of modern biology is simply ridiculous given that the germ theory of disease was a discovery of the late 1850s, centuries after colonization began and decades after the ratification of the US Constitution.

There isn’t much to recommend the rest of the section either.  Loewen engages in a variety of material fallacies of his own, including an equivocal use of the term ‘settler’, an ad hominem attack on WASPs, and an over-emphasis on the diseases that tragically ravaged the Native American population.  But perhaps the most tragic thing about Loewen’s presentation is his failure to recognize and celebrate the triumph of individual rights over collectivism.