Uncategorized

Real Questions on Real Wages

At first read, a recent article entitled ‘Real wages are essentially back at 1974 levels, report shows’, by Daniel B. Kline of the Motley Fool, argues persuasively that wages, after accounting for inflation, have only just recently returned to the ‘purchasing power’ they had in 1974.  However, after some reflection, his argument stirs up a lot more questions in its wake – questions that suggest that the real situation is a lot more complicated and nuanced.

Central to Kline’s argument is the idea of purchasing power and the associated statistics that show its trend over time.  To define purchasing power, Kline starts his piece with the following:

If you get a $1,200 annual raise on the same day that your rent goes up by $100 a month, you don’t need an accountant to tell you that you didn’t actually make any financial progress

Of course, Kline is right on this point.  The absolute value of wages is totally irrelevant.  The important factor is the ratio of wages to the things that they can purchase.

But how should that ratio be measured?  Each of us values different things.  Each of us purchases different items.  A statistically accurate study would analyze a cross-section of workers to determine what they could purchase as a fraction of their take-home pay.  But this approach is extremely difficult to carry out and, as a proxy, Kline uses data on inflation-adjusted wage growth (essentially the ratio of wage growth to inflation) as reported by the Pew Research Center writer Drew DeSilver in his article ‘For most U.S. workers, real wages have barely budged in decades’.

Below is the original graph from DeSilver’s article, which clearly shows two things.  The first is that, contrary to what the caption says, these data are no independent measure of ‘purchasing power’; they are simply inflation-adjusted average wages.  The second is exactly what both authors conclude: that wages over the past 50 years have fluctuated between $20 and $25 per hour (in real 2018 dollars) with no discernible trend and that the current level is the highest it has been since 1978.

The reason I object to DeSilver’s characterization that

”[p]urchasing power” refers to the amount of goods and services that can be bought per unit currency.

is that it is a gross simplification of what actually happens in an economy, even a simple one, let alone something as complex as a modern, multi-faceted one like we enjoy in the US.

Consider electronics.  A simple LED calculator in May of 1977 cost $40 and it could only perform a few mathematical functions.  Adjusted for inflation, that same $40 is worth over $165 today, which would easily secure a refurbished laptop or tablet with orders of magnitude more computing power.  The same could be said for entertainment, access to information, and durable goods and car purchases (factoring in quality and capability).  The purchasing power of a dollar for anything associated with the digital economy is the highest it has ever been.
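The comparison above is just a ratio of price levels.  Here is a quick sketch of the arithmetic; the CPI-U values used are approximate, not official figures.

```python
# Back-of-the-envelope CPI adjustment, the same arithmetic behind
# "$40 in 1977 is worth over $165 today".

def adjust_for_inflation(amount, cpi_then, cpi_now):
    """Scale a dollar amount from one price level to another."""
    return amount * cpi_now / cpi_then

CPI_MAY_1977 = 60.3   # approximate CPI-U for May 1977
CPI_2018 = 251.0      # approximate CPI-U for mid-2018

print(round(adjust_for_inflation(40.00, CPI_MAY_1977, CPI_2018), 2))
# the $40 calculator works out to roughly $166 in 2018 dollars
```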

Alternatively, the costs of higher education and health care have outpaced inflation by large amounts, so much so that the wages of even the most highly compensated worker couldn’t keep pace; larger and larger shares of income need to be devoted to these two sectors.  But it isn’t correct to infer that purchasing power in medicine is the lowest it has ever been.  The quality of medical care is so substantially greater than it was in 1978 that it is hard to determine just how much better or worse off a worker earning the average wage really is.  Higher education is a different story entirely.

These types of problems plague the measurement and characterization of inflation, so one must treat inflation-adjustment calculations with some caution and not overgeneralize, as seems to be the case with Kline and DeSilver.

There is another objection to be raised here.  DeSilver’s data show average rather than median wages.  The distribution of wages is clearly skewed toward higher values (nobody earns a negative wage but the upper wage is effectively unbounded) and, for such distributions, it is more appropriate to use the median.  It isn’t at all clear how different that measure is from the average as a function of time, although frequently-expressed concerns about rising income inequality tend to suggest that the gap is growing rather than shrinking.

DeSilver does try to provide some additional insight into the distribution by providing Pew Research Center data showing how various percentiles fared.

DeSilver cites the wages of the lowest tenth increasing by 3.0% since 2000 (essentially at the rate of inflation, a point returned to below) while the top tenth increased by 15.7%.  But again, there is no simple way to map these increases into changes in purchasing power, since the analysis would have to look at the actual purchases of members in each tier of the distribution.

These objections are important but the real objection to Kline’s analysis is in how time is treated.  In his anecdote, the raise in wages and rent happened at the same time.  But when looking over 50 years of data many factors enter into play that aren’t there for short-term analysis.

The first is that the type of work being performed in the US has evolved significantly in the last 50 years.  In large measure, manufacturing jobs have fled overseas and the lower-skilled worker is now performing work that requires even lower skills and, perhaps, nets lower wages.  Additional factors include a rise in benefits being offered that offsets wage growth (DeSilver cites a 22.5% inflation-adjusted growth in benefits compared with the 5.3% growth in wages since 2001) and lagging educational attainment compared with other countries.

These considerations certainly play a role, but a central question not broached in either Kline’s piece or DeSilver’s study is the speed with which an individual moves between percentiles.  Each point in the time series shown in each figure is a statistical snapshot of the economy in the year in question.  It is tempting but wrong to conclude that those who start in the lowest decile remain in the lowest decile.

Consider the following scenario.  Suppose that entry-level workers enter the economy each year and take their place for one year in the lowest tenth before moving upwards.  Suppose further that each of them, on average, has just a basic skill set worthy of the modest wage that comes with being in this bottom decile.  Then the expectation we would have for wage growth in this decile is simply to match inflation, nothing more.  The skill equity they earn is their greatest compensation, since it enables them to ‘graduate up’ to more highly paying jobs.
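This scenario is concrete enough to simulate.  Below is a toy sketch of it; the entry wage, inflation rate, and post-graduation real growth rate are invented numbers chosen purely for illustration.

```python
# Toy cohort model: each year a fresh cohort of entry-level workers fills
# the bottom decile at an entry wage that is simply indexed to inflation.
# The decile's snapshot wage therefore only matches inflation, even though
# every individual worker earns real raises after 'graduating' upward.

inflation = 0.03        # hypothetical annual inflation
entry_wage = 10.00      # hypothetical entry wage in year 0 (dollars/hour)
years = 20

# Snapshot statistic: the bottom-decile wage observed in each year...
snapshot = [entry_wage * (1 + inflation) ** t for t in range(years)]

# ...which is flat once deflated back to year-0 dollars
real_snapshot = [w / (1 + inflation) ** t for t, w in enumerate(snapshot)]

def real_wage_after(years_worked, real_growth=0.05):
    """Real wage of an individual who graduated out of the bottom decile."""
    return entry_wage * (1 + real_growth) ** years_worked

print(round(real_snapshot[0], 2), round(real_snapshot[-1], 2))  # flat decile
print(round(real_wage_after(10), 2))                            # rising individual
```

The snapshot series stagnates while every individual passing through it gets ahead, which is exactly the distinction the decile data cannot reveal.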

This simple model, far more complex than the ‘excessively simplified example’ Kline uses to begin his piece, more than accounts for the data that DeSilver compiled.  But it is just a model.  The truth, no doubt, is far more nuanced and requires that we ask a lot more questions before forming a conclusion.

Betting on Incentives

The intersection between the legal world and the economic world is often very interesting.  The usual topic of debate centers on the deregulation or prohibition of some activity that is not criminal or immoral, per se, but rather permitted or banned for a host of different reasons.  The tension between the two sides of the argument reveals a great deal about the self-interests roiling in the economy.  Who should get to sell coffins?  What kind of a license, if any, should you need to work?  How should the internet be regulated?

Now add to that list: just how should we safeguard sports from the taint of gambling?

On this particular question, the Supreme Court recently handed down a decision in the case 16-476 Murphy et al v. NCAA et al, declaring the Professional and Amateur Sports Protection Act (PASPA) unconstitutional.  PASPA made it unlawful for states “to sponsor, operate, advertise, promote, license, or authorize by law or compact” gambling on competitive sporting events.  The Supreme Court sided with the State of New Jersey (Murphy, who is Governor) in finding that the law violated the Constitution’s anticommandeering principle (10th Amendment) by not outlawing sports gambling itself but rather proscribing the States from legalizing it.

Of course, the Federal Government couldn’t outlaw sports betting outright, as four states already permitted such activities, the most prominent example being the State of Nevada and its glitzy mecca, Las Vegas, and so it simply barred any other state from opening the door.  The status quo may have persisted longer except for the rise of online sports betting.  As Richard Wolf points out in his USA TODAY piece Supreme Court strikes down ban on sports betting in victory for New Jersey,

What has made the [PASPA] law anachronistic is the advent and rapid growth of Internet gambling. Rather than stopping sports betting, it helped push more of it underground, creating a $150 billion annual industry. That dwarfs the $5 billion bet in Nevada, the lone state with a legal sports book that preceded the federal law.

With that much money on the line, it was no wonder that various states and municipalities wanted their hands in the boodle bag.  The key question is: why did the National Collegiate Athletic Association (NCAA), along with the big four men’s professional sports leagues, bring multiple actions to block New Jersey’s desire to legalize sports betting?

The surface answer to this question is simple.  These businesses look to guard their interests in the integrity of the game, an interest that grew out of past scandals.  The most well-known example is the scandal of the Black Sox, in which members of the Chicago White Sox were accused of throwing the 1919 World Series in exchange for money from a gambling syndicate.  The most important example for the NCAA is the City College of New York point-shaving scandal in 1951, in which numerous college players, across many teams, were implicated in fixing games for quick money.  Both of these scandals severely damaged public trust in organized sports, in turn damaging the marketability of the product these leagues offer.

But does this answer really hold water?  According to Supreme Court Ruling Favors Sports Betting by Adam Liptak and Kevin Draper of the NY Times

Officials across sports have for years complained that legalized wagering would lead to the corruption of their games through match-fixing, though there is no indication that is a realistic concern. Sports betting is legal and wildly popular in Britain, for example, but the integrity of the Premier League has not suffered. In fact, legalizing gambling allows companies and leagues to monitor gambling patterns and flag betting irregularities that could suggest corruption.

Liptak and Draper conjecture that:

The leagues and their teams long fought efforts to make it so, because, among other reasons, they were not assured of being able to directly tap into the new, vast revenue stream.

suggesting something akin to professional jealousy.

I think that the real answer, at least in the case of the NCAA, lies deeper in the fact that their economic viability depends on what is essentially an indentured workforce.  The reason that college players are interested in ‘quick money’ is that they are the primary talent for a multi-billion-dollar industry but share in very little of the benefits.

Sure, they receive tuition remittance (i.e., a scholarship) from the institution they attend and won’t be burdened with student debt for an education that may be worthless (as so many degrees seem to be these days), but that’s about it.  They have to work very hard for this perk.  Often, they have to risk their health in meeting their athletic obligations.  They don’t have the amounts of free time other students have and they are subjected to substantial limitations on their free speech and their pursuit of economic success.

I am not arguing that they should be given a better deal or that the institutional agreements are unfair.  These students have decided, for a variety of reasons known only to them, to engage in a contract with the colleges and universities.  And it is true that some of them strike it rich after they’ve paid their dues as college players by joining the professional leagues; but the percentages are quite low and the promise of a fat payday will only keep a small fraction of the players in line.

Rather, I am simply arguing that the amateur athletes feel a lack of equity (in the economic sense) and can be tempted easily by gambling.  The cleanest way for the NCAA to protect the integrity of the game is to drop the fig-leaf illusion of student-athletes – an idea that may have made sense decades ago when intercollegiate sports were more a tradition and far less a business – and pay the students.

A bit of reflection should show that PASPA never really protected collegiate sports as intended.  Any sufficiently industrious player can figure out a way to thwart the ban on sports gambling.  Maybe Johnny the star wide receiver can’t bet, but second cousin George can, and how can we prevent them from conspiring to fix a game?  The better course of action is to provide Johnny with an economic incentive to play honestly; but doing so cuts into NCAA profits.  Instead, the NCAA would rather have the government bear the cost of safeguarding the integrity of the sport – an unsportsmanlike attitude if there ever was one.

 

Cumbersome Cumberbatch

Ask anyone who views motion pictures as high art what they think about movies.  Very often, you will receive an exposition on the ‘special’ nature of the medium that allows it to rise above ‘mere entertainment’ to become a vehicle of social change.  Such films present a microcosm of society; a lens by which we can collectively self-examine and learn.

If it is true that the stories featured in films can be microcosms for life, then it shouldn’t be a stretch to see that the business of Hollywood can also be a microcosm for the economy; or maybe, more accurately, a petri dish where various strains of economic policies incubate, infect, and metastasize.  Case in point: the recent declaration by Benedict Cumberbatch that he won’t take roles in productions in which women are not paid equally.

Look at your quotas. Ask what women are being paid, and say: 'If she's not paid the same as the men, I'm not doing it.’

These sorts of announcements are common, but what exactly is meant by ‘not paid the same as men’?  Before crafting a policy to address an economic wrong, we need to define what the wrong is and how to recognize and measure it, before turning to remediation and the question of how economically viable that remedy may be.

To start, recognize that work in Hollywood is done almost exclusively by contract, typically negotiated between a legal representative of the production and the actor’s agent.  Long gone are the days when actors were salaried to a studio and they got roles based on talent, availability, and the desire by the company to expand the expertise of its staff.  Each actor exercises his autonomy in deciding to accept a role.  Each actor tests the market demand with respect to his talent, exposure, popularity, and so on.  Each actor modifies his market supply based on interest, availability, strategic positioning, and the like.  A multitude of factors go into making the decision to sign on to a production or to let it pass by.

As a result, it is incredibly complicated, if not downright impossible, for all of these factors to be analyzed and controlled so that each actor is given exactly what he ‘deserves’.  Exactly how does one normalize actor salaries to produce an equal-pay outcome?

To illustrate some of these complexities while keeping the scope manageable, let’s imagine a two-person movie with a male and a female lead.  There are several two-person films that have been made over the years (e.g. Sleuth) and one, in particular, nicely fits the bill:  the very disturbing movie Closet Land (click here for the full movie).  Closet Land features the chilling interrogation and torture of a children’s author (Victim played by Madeline Stowe) by a member of a totalitarian regime (Interrogator played by Alan Rickman) unhappy with her subversive stories.

How do we determine what metric to use to set the pay for the actors involved?  Should pay be based on how many lines are uttered, or by the total number of minutes the actor appears on-screen?  Maybe count the Twitter or Facebook subscribers for each actor, as a measure of fan-appeal, and figure that into the computation.  Don’t forget about the number of Oscar or Golden Globe nominations or awards.   If Closet Land is going to be a vehicle for social change then we want people to watch it – we want stars who will draw an audience into theaters.

The situation becomes even more muddled when we turn to still more subjective measures based on talent and emotional delivery.  Things like the character’s importance to the story, or how evocatively the actor portrayed a character’s death at a pivotal point in the plot, somehow have to figure into the pay the actor receives.  Who is more important to Closet Land: Victim, who, as a cruelly treated innocent, allows us to identify with the terrible plight of the victims of totalitarianism, or Interrogator, whose brutish behavior drives home the horrors of life under such regimes?

Put all these messy considerations aside, for the sake of argument, and simply assume that there is an unambiguous way to determine the merited pay for a given actor based on some amalgam of all of the above.  Call this measure the actor’s worth.  Also call the pay that the actor is being offered by the production the actor’s pull.
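To see why even a well-defined ‘worth’ would be contentious, consider what such an amalgam might look like as code.  Every metric, normalizing constant, and weight below is a pure invention, which is rather the point: any worth formula bakes in arbitrary judgments.

```python
# A hypothetical 'worth' score: a weighted amalgam of the measurable proxies
# discussed above (lines spoken, minutes on screen, social-media following,
# award nominations).  All numbers here are invented for illustration.

def worth(lines, screen_minutes, followers, nominations,
          weights=(0.3, 0.3, 0.2, 0.2)):
    """Combine normalized metrics into a single weighted score."""
    metrics = (
        lines / 500,               # against a hypothetical 500-line script
        screen_minutes / 90,       # against a 90-minute runtime
        min(followers / 5e6, 1),   # cap fan appeal at 5 million followers
        min(nominations / 10, 1),  # cap awards at 10 nominations
    )
    return sum(w * m for w, m in zip(weights, metrics))

# Two leads in a two-hander can come out nearly equal, or not, depending
# entirely on the arbitrary weights and caps chosen above:
print(round(worth(250, 80, 2e6, 4), 3))
print(round(worth(240, 85, 3e6, 3), 3))
```

Change any weight and the ranking of the two leads can flip, which is the complication the argument sets aside by simply assuming an unambiguous measure exists.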

What Mr. Cumberbatch wants to correct are those situations where the female lead’s worth is more than her pull when her male counterpart’s pull equals his worth.

Even with all these assumptions in place to strip much of the complexity and messiness away from this situation, there are still a lot of nuances in an ‘inequitable’ situation that can’t be dismissed with the simple-minded “it’s because she’s a woman” response.

First, the actress may be starting out in the business so that her pull is less than her worth because nobody has had the chance to see what she can do.  Perhaps she’s been an amazing actor in stage productions on Broadway but is essentially unknown to the movie industry.  Regardless of the circumstances, information about her worth is unavailable to the market and the information cost to find out more is prohibitive.  In this case, her smaller pull reflects the market’s uncertainty.

Another scenario:  suppose that the actress is a recognized star, able to pull in more than her male counterpart, but is between contracts and willing to take a smaller deal to be in a movie that she believes can actually effect social change.

Yet another scenario: suppose the actress is a proven commodity but doesn’t quite fit what the production wants; maybe she is older than the role calls for.  She might want to negotiate her pull downwards below her worth to secure the role.

In all of these scenarios, and countless more, the economic freedom of the actress to negotiate her situation to her benefit, as judged by her and her alone, should be unfettered.  Note that in all of these scenarios there are factors under her control and others over which she can exercise little or no influence.  A naïve equal-pay-for-equal-work policy would produce barriers to entry that inflict harm on the actress.

Of course, a skeptical reader may be objecting vociferously at this point.  What Cumberbatch really meant wasn’t any of this.  Sure, it’s hard to quantify what an actor deserves, and sure an actor may want to make a deal for a variety of reasons, but what Benedict meant to address is when the male and female lead are identical in worth but the production company is full of misogynists.

Leaving aside the obvious critique of the simplistic notion that any two actors (or people) can have identical worth, suppose that the production house is misogynistic.  Is an equal-pay-for-equal-work policy the best way to address the injustice?

The most intelligent approach centers on letting the free market do its job.  As Milton Friedman discusses in the following clip, companies that discriminate against a segment of the workforce ultimately bear a cost in the free market, a cost they don’t bear under equal-pay-for-equal-work policies.

So, what does one make of the statement by Benedict Cumberbatch?  Perhaps he has a clear picture that Hollywood isn’t a free market, that collusion abounds, and he hopes to right a wrong now that his star is on the rise.  Unfortunately, my own interpretation is a rather cynical one, based on the paragraph that followed the quoted one above.  It’s a simple sentence that reads:

Cumberbatch hopes to enact this policy at his new production company, SunnyMarch, as well.

- Abigail Hess

Yep!  It doesn’t take a Sherlock to suspect that all this virtue signaling is nothing more than an attempt to secure free advertising for a fledgling business.

Snobs versus Slobs

The title of this month’s blog is borrowed from that wonderful comedy from 1980, Caddyshack, which is itself an homage to the old commedia dell’arte style of theater.  The title seemed apt for two reasons.  First, the subject of this blog is the economics of theater art.  Second, the interest lies in the tension between the relatively new digital medium of streaming, playing the role of the slobs, and the more venerable medium of motion pictures, or movies, if you prefer, as the snobs.

This particular instance in the classic confrontation between old and new pits the Cannes Film Festival against Netflix.

I’ve been a Netflix customer for nearly 20 years, having signed up for a membership sometime in 1998, and I’ve always liked the idea of DVDs arriving in the mail with movies for what passes for the big screen in my home.  In the intervening years, Netflix evolved under its own success and moved away from ‘rentals’ and into streaming and generating its own content (as an aside, I question the logic of these moves from a business standpoint, but it’s their business model).  As a result, Netflix now finances and makes its own motion pictures, which neatly brings us to the Cannes Film Festival.

The first Cannes Film Festival was held in 1946 and, in the seventy-some-odd years since then, it has grown in international standing and acclaim.  Movie makers from all over the world covet the Palme d’Or (golden palm) award.  The festival has accumulated a great deal of both financial and reputational capital, making it a large player in the entertainment economy.

On the surface there seems to be a perfect match between Netflix and Cannes – a marriage made in media heaven.  On one hand, Netflix is making movies while, on the other hand, Cannes is seeking movies to exhibit.  What could go wrong?

Well, as a recent article from the NY Daily News (Netflix films can no longer compete at Cannes Film Festival, by Rachel Desantis, 3/25/2018) reports, Cannes is unwilling to allow Netflix movies to take home that golden palm branch.  A key quote from the article makes Cannes’s position abundantly clear:

The Netflix people loved the red carpet and would like to be present with other films. But they understand that the intransigence of their own model is now the opposite of ours.

-Thierry Fremaux, Cannes Film Festival

The use of the word ‘intransigence’ is most telling.  What intransigence, exactly, is Fremaux reacting to?  The fact that Netflix prefers to exclusively stream its content rather than show it in theaters.  As discussed in Desantis’s article,

…Netflix attempted to screen the films in France for a few days ahead of the festival, but were unable to due to the country's "strict chronology laws."… Cannes now requires movies to have some sort of theatrical release in France, barring Netflix from eligibility.

In a nutshell, Netflix’s movie making may be compatible with Cannes’s role as exhibitor, but the rest of Netflix, which also acts as distributor and theater owner, is a direct threat to movie houses all across France and the rest of the world, for that matter.

There also seems to be a big-government component thwarting Netflix in its chance to earn the Palme d’Or, as the ‘strict chronology laws’ cited in the piece seem to exist as a regulatory hurdle for alternative media or, perhaps, simply as protection for French film regardless of the source.  Nonetheless, the presence of governmental protection lurking in the background is a bad sign.

Desantis’s article goes on to quote Fremaux as saying:

We have to take into account the existence of these powerful new players: Amazon, Netflix and maybe soon Apple," Fremaux said. "(But) Cinema (still) triumphs everywhere even in this golden age of series. The history of cinema and the history of the internet are two different things.

-Thierry Fremaux, Cannes Film Festival

What we see from Cannes is classic economic snobbery of the kind that spelled the undoing of Kodak (see How Kodak Went So Wrong).  Fremaux’s focus seems to be on the how: record on photographic film, edit and post-produce, distribute to theaters, and have the masses watch together at prescribed times.  He should be focused on the what: a good story told in a good way, regardless of how it is consumed.

Consider a similar exchange from Singin’ in the Rain (not only a fine musical but also a very funny and honest critique of the motion picture industry):

To drive this point home, imagine if you asked a friend what he thought about a novel and his response centered on criticism of the page size, the font, and the binding.  He then moves on to criticize audio versions of the same work since they fail to give the reader the same experience as turning pages.  At the end of this rant, you are still left asking ‘yes, but how was the book?’, by which you mean ‘tell me about the story.’

I’m certain that behind closed doors at Cannes, Fremaux and his colleagues scoffed at the pretentious Netflix just as the members of Monumental Pictures scoffed at the idea of ‘talkies’ in this clip (also from Singin’ in the Rain):

I wonder if Cannes will be laughing a decade from now.

Final Note: After the drafting of this post, Netflix ratcheted up the ante in its showdown with Cannes by threatening to withdraw its movies.

Computing on The Shoulders of Wealth

Isaac Newton said in 1675 that “If I have seen further it is by standing on the shoulders of Giants.”  This quote, frequently viewed as a landmark philosophical statement about science and knowledge, is rarely, if ever, viewed as a statement of economics.  This omission is ironic since the entire enterprise of scientific discovery has been a major engine of wealth generation, a fact that often escapes the modern mind, which focuses on the perceived (rather than actual) purity of the pursuit.  More importantly, Newton’s sentiment captures how a specialized economy empowers each of its members, although, perhaps, a better way of phrasing it would be “If I have accomplished much it is because I have millions standing behind me.”

In an earlier column (Division of Labor in November of 2016), I talked about how the division of labor in a specialized economy made it impossible for a single person to know how to manufacture the simplest of items: the conventional #2 pencil.  The aim of that article was to humble each of us with the complexity of the day-to-day interactions that surround us, especially those of us that favor central planning.

But there is a complementary point of view, already alluded to above, that is rather empowering and exhilarating.  This viewpoint says that behind each product in an economy stand millions of people who contributed in one way or another.  When we pick up even the most modest of things, a multitude of people is represented simply by that product’s existence.  When we produce the smallest of things, we are leveraging an immense knowledge base that we share with each other without really noticing.

To find a case in point, look no further than our own cell phones and the apps that inhabit them.

I recently discovered the benefits of interval training, a physical exercise routine where one engages in high intensity exercise for a short time span followed by a short rest period and then the resumption of the activity.  The articles I read on this approach talked about how to transition from traditional cardiovascular routines, the numerous benefits, the number of repetitions, and so on.  None of them talked about the practical concerns of implementing such a routine, such as how to time the interval, and how not to lose count.

So, being a do-it-yourself kind of person, I had two immediate thoughts:  1) how hard could it be to code up an interval timer and 2) this would be a great project that would finally force me to learn how to code an Android app.  As I continued to mull these two thoughts over, I began to develop a plan.  I thought about the calls to the system clock to get the time, the sound effects needed to cue the timer (since one is usually engaged in the exercise and unable to look at the clock), and other related notions.  The plan to create this app involved all sorts of components and tools and approaches that millions of people had contributed to the economy as a whole, and I reflected on the technological might each of us taps into.  Not just the power associated with running a gadget – which all of us enjoy when we operate a car, a computer, a high-def TV, and related gear – but also the might associated with creating a new gadget out of existing ones.
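For what it’s worth, the timer logic I had in mind amounts to very little code.  The sketch below is plain Python rather than an Android app; the names are my own inventions and a print statement stands in for the beep a real app would play.

```python
import time

# A minimal sketch of the interval-timer logic described above: alternate
# work and rest segments for a fixed number of rounds, sounding a cue at
# every transition so the exerciser never has to watch a clock or keep count.

def build_schedule(rounds, work_s, rest_s):
    """Return the (label, duration-in-seconds) segments of a full session."""
    segments = []
    for i in range(1, rounds + 1):
        segments.append((f"work {i}", work_s))
        if i < rounds:                      # no rest after the last round
            segments.append((f"rest {i}", rest_s))
    return segments

def run(schedule, beep=lambda label: print(f"*beep* {label}"), sleep=time.sleep):
    """Play a session: cue each transition, then wait out the segment."""
    for label, duration in schedule:
        beep(label)
        sleep(duration)

# e.g. eight rounds of 30 seconds on, 15 seconds off:
# run(build_schedule(8, 30, 15))
```

The point, of course, is not these dozen lines but everything beneath them: the clock, the speaker, the language runtime, and the device itself.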

As my thoughts continued along the ‘I can make a new shiny’ path, the more pragmatic side of my mind chimed into the conversation I was having with myself, with an innocent but insistent little point: check to see whether someone else had already developed and supplied an app that fit the bill.  Of course, the answer was yes, and several fine apps presented themselves after a simple and cursory search.

The level of sophistication varied from app to app.  Some were bare-bones in their functionality, some had an immense number of bells and whistles.  But regardless of how they stacked up against each other, each app reflected a countless number of human hours spent researching, experimenting, developing, and refining how we interact with the world.

I’m not just referring to the computer resources used to develop the look-and-feel, the modules used to access the clock to mark when the interval begins and ends, or the functions used to produce the sounds that signal the exerciser when to start and stop.  I’m also talking about the engineers who designed the phone, the communications people who developed the protocol for the phone to talk to the wi-fi, the metallurgists who figured out how to refine the metals, and the geologists who figured out where to find them.  I’m talking about the businessmen who figured out how to marshal the capital needed to fund the research and development, the financiers who supplied the money, and the accountants who tracked it all.

So, every time we use any tool, technique, or technology, there are literally millions of our fellow beings standing behind us, supporting us in our every move – very inspiring, isn’t it?

Signs of Rot

Milton Friedman often drove home the point that economic freedom is a necessary component of all the other freedoms, such as political and religious freedom.  The idea is that the freedom to choose is a vital element of being human and that economic freedom is the canary in the coal mine of all the freedoms.  It is the one that is easiest to curb or eliminate, and so a society that curtails economic freedom is either already curtailing other freedoms or well on the way to doing so.

Of course, not all freedom results in good actions on the part of those who wield it.  The converse of Friedman’s point of view, that economic freedom is required to have a good society, is not true (nor would Friedman have espoused it) – just because you have economic freedom doesn’t mean that people will necessarily be wise in using it.

Labor unions and their interactions with management in both the steel and auto industries are excellent examples of stupid reasoning on both sides.  I personally saw, albeit from the outside as a teenager, the excesses of the steel industry.  During the ‘fat’ times, union leadership secured all sorts of benefits from a management that was eager to make huge concessions on the perks each worker would have rather than rock the boat and risk a strike.  I knew steelworkers (fathers of my friends) who had 13 weeks of paid vacation each year, received a new pair of boots every two weeks, and enjoyed other benefits that seem as extravagant to me now as they did then.

Similar excesses were rampant in the auto industry and, while there is no reason to go into details here, as they are amply documented elsewhere, the resulting unsustainability led to the movement of these jobs out of the ‘rust belt’ and into the right-to-work states throughout the South.  One need only look at the hollowing out of Detroit to appreciate the catastrophe that occurs when freedoms are abused.

Both sides of the equation, management and labor, thought little of sustainability and the long-term impact their actions would have.  They were living high on the hog and didn’t see a reason that the party would ever end.

I see a lot of similarities between these sad, old episodes and the current culture surrounding higher education.  Case in point is the news that accompanied the resignation of the president of Michigan State University amid the Larry Nassar scandal.

Much of the abuse attributed to Nassar occurred around the MSU training facility and, for better or worse, President Lou Anna Simon was pressured to step down, or felt that she needed to.  I am not interested in picking over the scandals, lurid details, and rumors; I bring it up only because the confluence of these events allows a glimpse into the severance packages afforded executives in big edu (we have big tobacco, big oil, and big pharma – why not big edu?).

According to an article entitled Contract for departing MSU president includes faculty job, lifetime perks by Elizabeth Joseph and Joe Sterling of CNN, fabulous benefits will redound to Dr. Simon upon her resignation.  She will continue to make her three-quarters-of-a-million-dollar salary for one year and over $550k for each year following, even though she will be in a rank-and-file academic role; she’ll have her health care covered for life; and she’ll get additional perks that make you shake your head.  And all of this under the umbrella of a public institution primarily funded by the taxpayers of Michigan.

Let’s take a step back and remind the reader of some of the more interesting facts about big edu.  First is the cost of tuition.  The figure below shows the average tuition costs (according to the College Board) from 1970 to 2017 in actual year dollars (i.e. raw dollars paid in the corresponding year not adjusted for inflation).

A rough fit of the data suggests that private tuition costs have risen by at least 5.2%, compounded annually, over that time period.  More surprising is the fact that public tuition costs have risen faster, by approximately 6.3%, over the same span.

A more careful analysis over the smaller time span from 2000-2017, a period in which inflation has held steady, on an annual basis, at around 2%, produces the estimates that private and public tuition have grown at rates of about 4.4% and 4.9%, respectively.  These are over twice the rate of inflation, and it is not at all clear (as both a parent of college-aged children and a teacher of college courses) that the quality delivered during this time period has kept pace with this incredible rise in cost.
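The compound-annual-growth-rate arithmetic behind these estimates is easy to reproduce.  Here is a minimal sketch in Python; the endpoint dollar figures are hypothetical stand-ins for illustration, not the College Board’s actual numbers.

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate between two values over a span of years."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Hypothetical endpoints (not the College Board's figures): if public tuition
# went from $500 in 1970 to $9,970 in 2017, the implied compound annual
# growth rate works out to roughly 6.6%.
rate = cagr(500, 9970, 2017 - 1970)
print(f"{rate:.1%}")
```

Plugging in the real published endpoints in place of these stand-ins reproduces estimates of the sort quoted above.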

So if the extra costs are not representative of value-added just where is all this money going and, equally troubling, where does all the money come from?

The answer to the second question is perhaps easier to fathom.  It seems that we have developed a two-headed beast of high demand and easy money.  The current system convinces graduating high-school students that college is the only option; that four years of fun and intellectual stimulation will assuredly lead to the good life; that the monumental student debt they will acquire can be ignored.  Advertising, in all its myriad guises, emphasizes the glamour of college life.  And a national program of student loans makes money easy to get and easy to spend.

On the other side of the equation, universities almost never say no to federal dollars (there are some that do, but on a percentage basis they essentially add up to zero).  And, if the money that is pouring in doesn’t go to improving educational outcomes in any substantial way, then where does it go?  Well, to new administrators, student liaisons, shiny new facilities, and the like.  None of these ‘improvements’ have a data-driven rationale or justification, and no hard studies bear out their impact on student outcomes, but the money has to be spent somehow, so spent it is.  (Note that I leave out the alarming trend of more and more students being taught by adjunct professors; that is just bitter icing on a bad cake.)

So in the end, I am forced to conclude that, although the residents of the ivory tower will take umbrage at the comparison, they are just as stupid, short-sighted, and greedy as the steel-working fools I grew up around.  Part of me hopes I live to see the day when the bubble bursts; part of me hopes I will be comfortably and cozily dead before it does.

The Worst Economic Myths

New Years:  a time to reflect; to look back and to look forward; to greet the future with hope; to make resolutions to better one’s life.  In short, starting the new year with a new perspective.  And perhaps no other mode of thought needs a new perspective as much as common beliefs about economics do.  So, in keeping with the theme of this month’s issue of Blog Wyrm, this article will look at the top 5 myths or misconceptions that still plague how we think about the dismal science.

5.  Wealth is a Zero Sum Game

It is all too common to hear even highly educated people claim that the wealth of the rich was obtained and is built on the backs of the poor.  And, unfortunately, there are some cases where this is actually true – isolated and small cases when measured against all of human progress – but nonetheless still there.  But, as discussed in the earlier post A Provocative Question, it is an unsupportable position that all wealth is based on a zero-sum game.  Additional proof, beyond the standard-of-living increases discussed in that post, is the fact that these improvements are enjoyed nearly worldwide.  Consider Ukrainian hackers who hold your computer hostage, Nigerian scammers who go on phishing expeditions, or South Korean gamers who ‘pwn a noob’ in Call of Duty.  None of these characters could exist without access to computers, electricity, and a culture of specialized labor that allows them to hone their craft.  It is undeniable that wealth is created.

4. Things Have Unique Values

Closely related to the zero-sum fallacy is another puzzling and widely held misconception: the notion that a good, service, or commodity has a fixed value that can be assigned by the market or that is agreed upon in a transaction.  As pointed out in the previous posts Value and Trade and Candy and Wealth, value is strictly in the eye of the beholder.  In any type of transaction, financially based or strictly bartered, each side gains.  This must hold, else the transaction would never happen.  When party A agrees to pay $10 for an item and party B agrees to take the money in exchange, both sides are gaining (or, at least, perceive that they are gaining).  That means that party A values the item more than the $10 he is willing to pay, just as party B values the $10 more than the item he just delivered.  The prices found in any market, for example a grocery store, are simply shorthand for the seller saying that he values the listed monetary amount more than (not equal to) the item in question.  The item for sale must be worth substantially less in his eyes than the price listed, else why would he go to the trouble of displaying, listing, and ultimately exchanging it?

3. Any Activity is Good for the Economy

It is often said, either by people citing John Maynard Keynes or by those purporting to be Keynesians, that any economic activity is good.  They will say things like: ‘In a time of crisis, get people digging ditches even if you have them filling them back up again.’  This myth is brilliantly debunked by Frederic Bastiat in his refutation of the Broken Window Fallacy.  As is often the case, subtle logic, such as Bastiat employs, is easier to understand when carried to an extreme, as is done in the post Save the Economy: Nuke a City.  There it is argued that if destruction were really embraceable as a valid economic strategy, then any society should always be destroying large swaths of property in order to ‘stir the pot’.  Interestingly, many of those same ditch-digging proponents who think arbitrary activity equates to economic stimulus are the same people who are most vocal about the wastes of war.

2. Licensing Protects

This myth is so pervasive, insidious, and just plain wrong that it is hard to know where to begin.  Taken almost as a doctrine of religious faith, most people believe that licensing by the state protects consumers.  But, as a critical examination of Airbnb, eBay, or any other of the myriad services offered on the internet reveals, human interaction can be dependably conducted with a minimum of government oversight.  And while these endeavors aren’t perfect, can an honest person say that he is somehow worse off engaging these goods and services than he would be with those offered in the highly regulated sectors?  Compare the rider experience of Lyft to that of conventionally licensed taxi-cabs.  Does the licensing really benefit the consumer?  (See Medallions for Freedom? for a look at just what licensing does to the taxi-cab owner.)  More often than not, licensing and regulation are used as barriers to entry that protect entrenched interests, which lobby the government, from new-found competition.  Just ask the monks of Covington, Louisiana, who were prohibited from selling coffins that undercut the ridiculous profits of funeral homes (Of Monks and Coffins).  And finally, there is ample evidence that lawmakers use licensing for their own ends, ensuring that their interests are protected from scrutiny while keeping all others out (Gunless and Gunrunners).

1. Socialism Works

Closing out this list is a myth whose scope and damage dwarf all the others: the myth that socialism works.  One would think that all the evidence required to debunk this myth is available just by looking at the failures of the USSR, Cuba, or Venezuela.  Unfortunately, the misery of the hundreds of millions of people who have suffered and died under this economic scheme over the past century or so is a dark testament to just how wrong, and yet how eternally appealing, this idea is.  Its adherents are willing to look past the enormous body of empirical evidence offered by these failed states and the experiential accounts of the victims and survivors.  Neither are they swayed by the historical analysis offered by those who experimented with the system (Free Riders on the Mayflower) nor by the body of theoretical analyses in behavioral economics (e.g., see Free Electric Riders, An Ultimatum You Can’t Refuse, or The Gravity of a Minimum Wage).  Just what will convince people that this myth is dangerous and should be shunned is beyond me – but I hope we find an answer soon.

Meet the New Normal, Same as the Old Normal

With all due apologies to The Who, the title of this post is inspired by the same cynical sentiment woven throughout, and aptly summarized in the closing line of, their classic song Won’t Get Fooled Again.  My cynicism is directed at the academic and professional economists who seem to ignore the breadth of US economic history (and human nature) and argue that, somehow, we are now in a unique situation not seen anywhere in the tumultuous upheavals of the past; that the store of US ingenuity and invention has run its course and we, and perhaps the entire world, are stuck at a new normal of low economic growth.  To these useful idiots I rejoin with one of the wisest lines from the same tune: ‘And the world looks just the same and history ain’t changed’.

To set the stage for my jaded view on these vapid practitioners of the dismal science, let’s take a look at a bit of economic data.  A simple visit to the Bureau of Economic Analysis is all it takes to get gross domestic product (GDP) figures in Excel from the years 1929 to 2016.  Yep, one URL, one click, and one download is all it takes.

I’ll confine myself to the first three columns containing year, GDP in billions of current dollars, and GDP in billions of chained 2009 dollars.  The year column is self-explanatory but the other two are worth reviewing.  The second column (GDP in current dollars) is the BEA’s best estimate, to the nearest 0.1 of a billion dollars, of the monetary value of all the goods and services produced within the US in a given year, expressed in that year’s dollars.  The third column (GDP in chained 2009 dollars) contains the BEA’s best estimate as to the value inflation-adjusted to 2009.  I can’t speak to why 2009 is chosen as the anchor.

When graphed, the GDP values, which show a general upward trend, also reveal the scars inflicted on the economy over these past 88 years.

The Great Depression’s presence is clear in the minimum in the GDP in 1933.  The wartime bump in 1941-1945, the post-war let down, the stagflation of the mid-seventies, the recession in 1991 are all noticeable.  But few features are as pronounced as the onset of the Great Recession and the subsequent lower rate of growth.

GDP growth is defined by taking the difference between the GDPs of two successive years and dividing by the earlier of the two.  A plot of the resulting percentages,

while open to a wide range of interpretations, looks like a system that is settling into a tighter operating range after a rather turbulent start.  The horizontal line represents the average value of 3.34 percent over the 87-year period.
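The growth calculation itself is simple arithmetic; here is a minimal sketch in Python, using made-up stand-in GDP values rather than the actual BEA series.

```python
def annual_growth(gdp_series):
    """Year-over-year percent growth: (GDP[t] - GDP[t-1]) / GDP[t-1] * 100."""
    return [
        (curr - prev) / prev * 100.0
        for prev, curr in zip(gdp_series, gdp_series[1:])
    ]

# Stand-in chained-dollar GDP values in billions (not actual BEA data)
gdp = [14000.0, 14400.0, 14700.0, 15100.0]
rates = annual_growth(gdp)
avg = sum(rates) / len(rates)
print([round(r, 2) for r in rates], round(avg, 2))
```

Running the same function over the downloaded BEA column reproduces the growth series plotted above, and averaging slices of it gives the period averages discussed below.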

A zoom into ‘recent’ history

shows a pronounced drop in annual growth in the post-Great Recession recovery.  Quantitatively, the average GDP growth over the time span from 1970-2007 (inclusive) was 3.09 percent.  This time frame includes the stagflation of the mid-seventies (runaway inflation and high unemployment), the 1987 stock market crash, the 1991 recession, and the 2001 tech bubble collapse, as well as the Reagan recovery and the ‘new digital economy’ of the mid-nineties.  The average GDP growth over the time span from 2010-2016 has been 2.13 percent, nearly a full percentage point lower than this shorter-trend average and 1.2 percentage points lower than the full long-term average.  If the time span from 1929-2007 is considered, the average GDP growth is 3.58 percent (even including the Great Depression), making the recent sub-par growth look even worse.

All sorts of theories have been concocted by the intelligentsia to justify this ‘new normal’.  For example, John Fernald, in his October 11, 2016 article entitled What Is the New Normal for U.S. Growth?, states:

Estimates suggest the new normal for U.S. GDP growth has dropped to between 1½ and 1¾%, noticeably slower than the typical postwar pace. The slowdown stems mainly from demographics and educational attainment. As baby boomers retire, employment growth shrinks. And educational attainment of the workforce has plateaued, reducing its contribution to productivity growth through labor quality.

He further argues that ‘the new normal pace for GDP growth…might plausibly fall in the range of 1½ to 1¾ %...based on trends in demographics, education, and productivity.’

David Houle offers a slightly different interpretation.  In his article Low inflation and GDP growth is the new normal, dated August 15 2016, Houle lays the blame at the feet of:

…globalization and technologically induced trends and realities, making historical economic comparisons less relevant.

In the regular column Buttonwood’s notebook, published in The Economist on June 2, 2016, the author had this to say:

This year's approach describes the global outlook as "stable but not secure". In essence, Pimco thinks that the "new normal" will continue, with the US managing 1.5-2% growth (at or above trend, in its view), Europe managing 1-1.5% and China at 5-6%.

In her piece entitled Report: 3 Percent U.S. GDP Growth Rate Is Unrealistic, published in Inc. on May 19, 2017, Leigh Buchanan relies heavily on the opinion of Marc Goldwein, senior policy director at the Committee for a Responsible Federal Budget.  Buchanan closes her piece in dramatic fashion by saying

"The bottom line," said Goldwein, "is we should not be buying magic beans. Three percent growth is not completely impossible. But it would be a heroic feat to get there."

And finally, rounding out our sample, is Mark Thoma’s What if slow economic growth is the new normal? (CBS News’ MoneyWatch on September 19, 2016), wherein he cites Robert Gordon’s explanation for this historical suppression of growth in the US, saying it is that

the country has entered an era where productivity growth will be much lower than in the past. According to Gordon, the digital revolution now underway is much less important than inventions that came about between 1870 and 1970 such as electricity, sanitation, chemicals, pharmaceuticals, transportation systems (the internal combustion engine in particular) and communication.

Of course, there were many, many more jumping on the ‘new normal’ bandwagon.

Rare (and unheeded) was the voice arguing that 3 percent growth was once again realistically possible.

But here we are, with revised estimates for 2017 Q2 and Q3, showing two successive quarters with GDP growth over 3%, and that trend seems to be continuing into the fourth quarter.  I won’t try to guess what policies have changed but clearly something has.  And, for this analysis, it isn’t actually important to figure out what the change is.  It is simply worth pointing out that all of these prognosticators and economic pundits were wrong – dead wrong.

The human economy is a complex thing with an immense number of moving parts.  It is the height of arrogance to label ‘now’ as extraordinary compared to ‘then’.  By what measure do you say that the transition from pre- to post-WWII US involvement in the world was less ‘globalizing’ than current trends?  By what technological yardstick do you argue that the advances from 1862 (the final publication of Maxwell’s equations) to 1929 (the Solvay conference) are more important than the progress from 1990 (dawn of the internet) to 2017 (bio-engineering, quantum entanglement)?  Has human nature changed recently so that the period from 2009 to 2016 must be analyzed differently than the periods before (conveniently all lumped in as the ‘old normal’)?

You may be wondering why, then, these pundits jumped on the ‘new normal’ bandwagon in the first place.  There are lots of possible reasons.  No doubt some of them wanted to be seen as wise and erudite, others succumbed to the all-too-human tendency to view the current age as somehow harder than all others (‘in my day…’), some had an agenda for talking the economy down, and some wanted to avoid the controversy of criticizing the policies of the time.  And, as interesting as it may be to diagnose why they said what they said, the real question is why any of us listened to them.  Maybe next time we won’t be fooled again.

Great versus Great

Well, it’s just about a decade since the Great Recession began, and I suppose it is natural to look back and reflect on both the pain and the promises of that tumultuous time in US history.  Fueled by wildly irresponsible lending and borrowing behavior, the downturn is dated by the National Bureau of Economic Research (the entity that is, according to the Wikipedia article, the official arbiter of US recessions) as beginning in late 2007 and extending deep into 2009, spanning 19 months of negative growth.

Widely regarded as the worst global economic crisis since the Great Depression, the Great Recession has elicited quite different responses depending on the responder's point of view.

For some, the depth of the Great Recession was mild in comparison to the Great Depression.  Peak unemployment rates during the latter reached 25% compared to 10% sometime around October of 2009 (according to the Bureau of Labor Statistics or BLS).  Misery and poverty were commonplace in the Great Depression on a scale that far outweighed anything seen before or since.

For others, the Great Recession brought real and lasting pain in the here and now; pain all the more exacerbated by the fact that the Great Depression ushered in a whole spate of policy techniques designed to help political leaders and economists avoid these sorts of downturns and to blunt their effects.  In particular, payroll employment dropped far more sharply than in the previous 5 recessions and persisted in this weakened state.

The previous graphic was taken from The Recession of 2007-2009, a fascinating (and sobering) summary of the Great Recession by the BLS that drives home just how devastating that period in US history was.

Nonetheless, the general impression is that, as a whole, the US is better off now than it had been in the aftermath of the Great Depression.   But, as pointed out in the article We’re About to Fall Behind the Great Depression by David Leonhardt, the recovery from the Great Recession has languished in comparison to its much-worse predecessor.

Leonhardt based his claim on a graphic produced by Olivier Blanchard and Larry Summers that shows the GDP per capita for adults aged 18 to 64, plotted against the number of years since the onset of the economic crisis.

Blanchard and Summers date the onset of the Great Depression to the Stock Market Crash of 1929, though there is some dispute over whether this event caused the Great Depression or whether subsequent policies turned a market correction into a full-blown catastrophe.  Nonetheless, the initial dip in GDP per capita from this starting point was a great deal more pronounced (gray line) than the dip after the onset of the Great Recession (gold line).  And the loss of relative wealth was also more severe in the Great Depression (year 4) than in its smaller counterpart.  But by the twelfth year, the recovery from the Great Depression was more pronounced than Blanchard and Summers’s projected trajectory for the post-2009 US.

Blanchard’s and Summers’s chartsmanship (or at least the NYT’s reproduction of it) leaves a lot to be desired, and surely they’ve engineered the presentation of the data to obscure the fact that US involvement in World War II began at the twelve-year mark.  Nonetheless, I believe their basic message is correct.  The US recovery since the end of the Great Recession has been anemic at best.

There are a host of reasons but they all fall under one broad umbrella – government policy and regulation during the aftermath of the Great Recession.

As is widely discussed, the US has one of the highest corporate tax rates in the world.  In addition, the US taxes companies that earn a profit overseas: they are taxed by the country in which they earn their profit and then are taxed a second time by the US if they try to bring that profit back into the US.  This has caused many companies to keep their profits invested in the countries in which they were earned and out of the investment markets here at home.

In addition, the regulatory structure has increased to unbelievably burdensome levels.   As discussed in a previous post (Haircuts and Wine), the single biggest obstacle to small business creation and subsequent innovation is the compliance burden visited on these entrepreneurs.  Since the bulk of small business profit and investment is done here in the US, regulatory hurdles and higher tax rates offer a double-whammy to increasing economic growth.

Another insidious and perverse aspect of federal policy is the growth of government largesse and the increasing incentives for people to get on the public dole and hang out.  Having a larger percentage of the population taking rather than giving hurts not only their dignity but the morale of those producing that which the takers are consuming.  Thus there is a two-fold hit to productivity – the loss of all the capability of those standing on the sidelines and the loss of motivation by those still in the game.

So, even though I think Blanchard and Summers slanted the presentation a bit with their graphic, if it serves as a rallying point for improving the economy then so much the better.

Oil Prices and the Efficiency Paradox

In my last column, I discussed how improved efficiency can enable consumption and increase demand rather than lead to conservation.  This so-called counter-intuitive behavior is codified by economists under the headings of the Khazzoom-Brookes Postulate and the Jevons Paradox.  In this column, I’ll turn from the general theory and look at these ideas in practice by analyzing oil consumption and associated prices/costs in the United States for the years 1949 through 2016.

But before diving into the numbers and performing a statistical analysis, I would like to take a small tangent to discuss the efficiency paradox and the apparent surprise and controversy it causes in economic circles.  This last assessment is based on the commentary surrounding the common literature associated with the Khazzoom-Brookes Postulate and the Jevons Paradox.

Personally, I am perplexed that some economists find anything unusual in the notion that increased efficiency leads to wider consumption of a good or wider adoption of a technology.  Recent history is littered with examples.  Take desktop printing.  Paper is a valuable resource; there is, and should be, tremendous pressure to properly use and steward the trees from which it is derived.  Paper was far more expensive (in adjusted dollars) and scarce in the 1970s than it is today.  One can barely go anywhere without finding reams of printer paper on sale; local grocery stores usually have an aisle devoted to office supplies (and some convenience stores do as well).  Electronic media, like PDF, even obviate the need for it, and yet there is far more paper circulating per capita today than 40 years ago.  Why?  Because its value has far outstripped its cost despite increasingly efficient ways of producing it.  Efficient production of paper and printers has opened avenues for use that were effectively closed to all but a handful a generation ago.  Professional printers were the only ones who could economically make attractive documents and signs back in the day; now anyone can.  The efficient use of paper has enabled a whole new way of enjoying its benefits and has, concomitantly, increased the demand.

The interesting and more far-reaching question is whether oil consumption reflects the ‘efficiency paradox’ as well.  There are plausible arguments that it should, based on the following analysis.

For the sake of argument, suppose that, on average, every person in the United States drove 100 miles each week.  Further suppose that, on average, fuel efficiency was 10 miles per gallon and the price of each gallon was $4.00.  Then the average demand for gasoline would be 10 gallons per capita per week.  Finally suppose that there is an increase in fuel efficiency to 11.1 miles per gallon.  We want to look at the possible responses to such a change.  There are three basic ones.

In the first scenario, the average motorist, content with his 100-mile/week habit will buy 9 gallons of gasoline and will take the $4.00 he saves and direct it to other goods and services.  In the second scenario, our average motorist says to himself that he’s already used to spending $40/week on gasoline and so he’ll buy the same dollar amount of gasoline (10 gallons) but he’ll drive a bit more for convenience or fun.  In the third scenario, our motorist may think that since gasoline prices have come down in a relative sense – the cost per mile he bears is now less – he will finally take those weekend trips to the beach he’s always dreamed about and he steps up his consumption to 12 gallons a week.
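To make the arithmetic concrete, here is a quick sketch of the three scenarios using the illustrative numbers above (100 miles per week, 10 then 11.1 miles per gallon, $4.00 per gallon); these are the hypothetical figures of the thought experiment, not real data.

```python
PRICE = 4.00                  # dollars per gallon
OLD_MPG, NEW_MPG = 10.0, 11.1 # fuel efficiency before and after the improvement

# Scenario 1: hold miles constant at 100/week and pocket the savings
gallons_1 = 100 / NEW_MPG                        # ~9 gallons
savings = (100 / OLD_MPG - gallons_1) * PRICE    # ~$4 redirected elsewhere

# Scenario 2: hold spending constant at $40/week and drive a bit more
gallons_2 = 40.00 / PRICE                        # still 10 gallons
miles_2 = gallons_2 * NEW_MPG                    # 111 miles instead of 100

# Scenario 3: the lower per-mile cost spurs extra driving
gallons_3 = 12.0
miles_3 = gallons_3 * NEW_MPG                    # ~133 miles

print(round(gallons_1, 2), round(savings, 2), miles_2, round(miles_3, 1))
```

Note that scenario 2 leaves weekly gasoline spending unchanged while mileage rises, which is exactly the flat per-capita consumption pattern examined in the data below.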

A bit of reflection on the above scenarios should drive us to two conclusions.  First, very few people are likely to fall into the first scenario.  All of us dream of doing more than scarcity allows.  Second, the above analysis assumes that the only thing that changes is the behavior of the average motorist.  Two significant drivers are left out: 1) the response of the oil producers to the changes in fuel efficiency and 2) changes in the motorist population.  It is very unlikely that the oil producers will do nothing in the face of increasing fuel efficiency.  Not wanting to risk a drop in demand, they are likely to adjust the price downward, as that is easier than adjusting the supply downward.  The latter course of action requires laying off workers, scaling back factory production, and curtailing distribution all along the supply chain.  It is also very unlikely that the average motorist can be thought of as anything other than a statistical snapshot in time.  People are born, grow, age, and die.  Tastes, habits, and behaviors change.  And the population grows as a function of time.  These latter factors make it far more difficult to analyze real-world data like oil consumption.

To that end, I pulled 4 sets of data to try to see the Jevons Paradox in action.  These were:  1) total oil consumption by day (thousands of barrels), 2) average raw cost per barrel in dollars, 3) US population by year, and 4) GDP by year (raw not inflation-adjusted).  The sources are:  Item 1, Item 2 (both from www.eia.gov), Item 3, and Item 4 (both from www.multpl.com).

The first graph of the data shows a comparison of total oil consumption and per capita use over the time span from 1949-2016.

With some minor variations, total oil consumption has risen steadily, except for three periods: 1) the aftermath of the oil shocks in the mid-1970s, 2) the breaking of stagflation in the early 1980s, and 3) the Great Recession of 2008-2010.  Interestingly, total oil consumption per capita remained flat from about 1982 to 2008, a 26-year period during which fuel standards were getting continuously better.  This is direct evidence of the Jevons Paradox, and it indicates that the vast majority of us fall into scenario 2 – we’ve allocated a certain amount to spend on gasoline and we drive so that we consume roughly that amount on average, regardless of how many extra miles we record.

Further support for the idea that most consumers fall into scenario 2 is found in the following plot of total oil costs as a fraction of GDP, which is a reasonable measure for judging the relative cost of oil in the economy.

Note that oil prices held fairly steady (between 2% and 4% of GDP) through much of that 26-year span during which oil consumption per capita remained constant.  Had the majority of us been scenario-3 types, the additional relative price reduction (from both efficiency gains and actual price declines) would have spurred a rise in per capita consumption.
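As a sanity check on the units behind that plot, here is how oil spending as a fraction of GDP is computed; the inputs below are round stand-in figures for illustration, not the actual EIA/BEA numbers.

```python
# Stand-in figures (not actual EIA/BEA data)
barrels_per_day = 19_000_000    # US oil consumption, barrels per day
price_per_barrel = 60.0         # average raw cost per barrel, dollars
gdp_billions = 18_000           # nominal GDP, billions of dollars

# Annualize daily consumption, price it, and divide by nominal GDP
annual_oil_cost = barrels_per_day * 365 * price_per_barrel   # dollars/year
share_of_gdp = annual_oil_cost / (gdp_billions * 1e9)
print(f"{share_of_gdp:.1%}")
```

With these round inputs the share comes out near the low end of the 2-4% band, consistent with the plateau visible in the plot.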