Problem-solving algorithm with simple questions.

Froeb’s three questions for diagnosing how to improve decision-making in an organization are surprisingly hard to use, and question #2 tends to cause the most problems.  I have also added the implicit fourth step; a short code sketch of the full checklist follows the list below.

  1. Who made the bad decision?  Responsibility?
  2. Was there an information problem?  Beware the hindsight fallacy explained below. 
    1. Did the decision maker commit a fallacy and misuse or ignore useful information?
    2. Did the organizational structure keep useful information away from the decision maker?
  3. Did the decision maker have good incentives?
  4. Solutions: How do we change the three dimensions above to get better decisions? 
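To make the checklist concrete, here is a minimal sketch in Python.  It is my own illustration, not Froeb’s: the field names (who_decided, better_info_was_feasible, and so on) are hypothetical labels for judgments an analyst would supply, and the function simply walks the four questions in order.

    # A minimal sketch of the diagnostic checklist (my illustration, not Froeb's).
    def diagnose(decision):
        findings = {}

        # 1. Responsibility: name the person or body that had the power to decide.
        findings["responsible_party"] = decision["who_decided"]

        # 2. Information: only a real problem if better information was feasibly available.
        if decision["better_info_was_feasible"]:
            findings["information_problem"] = (
                "fallacy" if decision["info_was_available_but_misused"]
                else "organizational structure kept information away"
            )
        else:
            findings["information_problem"] = None  # beware the hindsight fallacy

        # 3. Incentives: include non-monetary incentives, not just pay.
        findings["incentive_problem"] = not decision["incentives_rewarded_good_choice"]

        # 4. Solutions: change whichever dimension above turned up a problem.
        return findings

    example = {
        "who_decided": "regional sales manager",       # hypothetical case
        "better_info_was_feasible": False,
        "info_was_available_but_misused": False,
        "incentives_rewarded_good_choice": False,
    }
    print(diagnose(example))

The only point of the sketch is that question #2 has two branches, and that the solution in step #4 depends on whichever earlier question revealed a problem.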

Algorithm question #1: Responsibility

I am often struck by how hard this simple question is for many students to answer.  People often want to be vague out of a desire to avoid blaming anyone, but if there is to be any hope of change, the person or deliberative body that had the responsibility (and the power) to make the decision must be identified. 

This algorithm only works at the individual level, so if it is used to analyze a group decision, like one made by a board of directors, then it has to be repeated for enough members to change the outcome of the vote.

One of the reasons Americans dislike our national politicians is that it is nearly impossible to hold any of them accountable in our political system.  For example, Trump’s signature policy proposal in 2016 was to build a big, beautiful wall on the Mexican border.  Who gets the credit or blame for the amount of wall that was built?  The House, the Senate, the President, state and local governments, or the courts?  They all share some part of the responsibility for how much wall building gets done on the border, and it is really hard to know whom to blame (or credit) for what happened.  Nobody cared that Trump didn’t get Mexico to pay for it because nobody expects American politicians to follow through: politicians can always blame their failures on obstruction by opposing politicians, and many campaign promises are more about theater than realpolitik. 

In contrast, in a typical parliamentary democracy, there is nothing but the courts to hold back the ruling party, and voters expect politicians to follow through with what they say they will do.  If British Prime Minister Boris Johnson doesn’t follow through on a campaign promise, he cannot blame anyone but himself because there are almost no checks on the ruling party’s power.  Corporations are even more straightforward in holding employees accountable. 

Being vague about who is responsible for making a bad decision can be good politics for both executives and politicians, but it makes it impossible to fix decision-making problems. 

Algorithm question #2: Information

The problem is how to know who has the wisdom to make wise decisions; decision makers and managers are by far the hardest people to manage.  Fortunately, a true information problem is exceedingly rare because it can only be due to the misuse of available information.  There are two types of information misuse:

  1. A fallacy: This is a mistake in how someone used available information.
  2. An organizational structure problem that prevented the decision maker from getting available information that was needed. 

A fallacy isn’t caused by a lack of accurate information; it is caused by faulty reasoning (or irrationality) that misuses the available information.  Froeb is extremely misleading when he asks, “did the decision maker have enough information…?” because he is asking this question in hindsight, and in hindsight we always feel like we didn’t have enough information.  If we had perfect information about the future, we could avoid all regret, but that is impossible. 

Froeb’s framing of question #2 tends to mislead people because there is a natural human tendency to fall prey to the historian’s fallacy (or hindsight fallacy) when thinking about past information problems.  This happens because after a problem has happened and we know how it ends, we can’t think about the problem without the ending coloring everything about it.  It is like the way that watching a movie twice in a row is a totally different experience the second time.  It is hard to avoid erroneously assuming the existence of information that wasn’t available at the time a bad decision was made.

For example, the victims who died in the World Trade Center on September 11 did not make a bad decision to go to work that day due to an information problem, because they had no way of obtaining information about the terrorist attack before it happened.  There can only be an information problem if it would have been feasible to get someone with better information to help make the decision.  If no improvement was feasible, then it wasn’t a bad decision.  You might as well say that gravity made a bad decision to collapse the twin towers.  Thinking that a bad outcome necessarily means that there was a bad decision is an example of the hindsight fallacy and leads to bad management.  Sometimes very good things happen because of very bad decisions, and sometimes very bad things happen despite all the best decisions being made.  
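A tiny worked example may help separate decision quality from outcome quality.  The numbers below are my own illustration, not from the text: the bet’s odds and payoffs are hypothetical.

    import random

    # A bet with a 90% chance of winning $100 and a 10% chance of losing $50.
    p_win, win, lose = 0.9, 100, -50
    expected_value = p_win * win + (1 - p_win) * lose
    print(expected_value)   # 85.0: taking the bet is a good decision before the fact

    # Simulate one outcome: about 1 run in 10 prints -50,
    # a bad outcome that followed from a good decision.
    outcome = win if random.random() < p_win else lose
    print(outcome)

Judging the decision by the unlucky outcome, rather than by what was knowable when the bet was taken, is exactly the hindsight fallacy.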

People make lots of mistakes because they lack the correct information, but if they did not have the correct information when they made the bad decision, then they didn’t fall into a fallacy.  As Froeb says, if there was no feasible solution, there was no problem!  So if the decision makers didn’t have some information that later turns out to be crucial, then they can’t be blamed for not acting on it.

Algorithm question #3: Incentives

Most of the book focuses on selfish pecuniary incentives, and its strength is in how to use them well.  Unfortunately, the book neglects crucial non-monetary incentives like the inherent human drives to

  • find meaning in what we do.
  • be good people.
  • have friends.
  • seek beauty.
  • become excellent at something that we do.

Within organizations, non-monetary incentives are more important than monetary incentives for guiding day-to-day activities and monetary incentives are dangerous because they often create unintended consequences.  For example, in many of Froeb’s case studies, workers make bad decisions because they are too selfishly focused on monetary incentives.  Organizations where workers have more intrinsic motivation to do good work could avoid those problems, and it is important to build an organizational culture where people generally want to take pride in what they contribute. 

The culture of an organization is extremely important for shaping the norms of which incentives are emphasized and which are not, and that is not in the book at all.  It is also the most important part of leadership.  Leading by example and inspiration is one way to change a culture, and punishing workers who act selfishly at the expense of the organization is another, but changing organizational norms and culture is a topic that comes up in leadership classes rather than in economics.  So as you read the book, you should be critiquing Froeb’s peculiar view that humans are inherently selfish and obsessed with money above other motivations.

An organization’s budget communicates priorities which demonstrate the organization’s values.  The budget is a way of creating organizational culture and culture creates intrinsic incentives.  

Extrinsic incentives are difficult to design because of what computer scientists call the alignment problem.  This is one of the dangers of artificial intelligence: any incentives programmers give to an AI may have surprising unintended consequences.  For example, if we program robots to have a primary motivation to prevent humans from being harmed, then they might stop us from ever driving or crossing a road because of the risk of harm.  Even when programmers can play God and design an AI with custom-built motivations, it can still lead to disastrous outcomes.
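Here is a toy sketch of that misalignment in Python.  The policies and numbers are made up for illustration; the point is only that an objective which rewards nothing but the absence of harm is “satisfied” by a choice nobody intended.

    # Hypothetical policies a safety-obsessed robot could enforce, with made-up numbers.
    policies = {
        "let humans drive":       {"harm": 0.010, "human_goals_met": 1.00},
        "let humans cross roads": {"harm": 0.001, "human_goals_met": 1.00},
        "confine humans indoors": {"harm": 0.000, "human_goals_met": 0.05},
    }

    def naive_objective(outcome):
        # Rewards only the absence of harm; ignores everything else humans value.
        return -outcome["harm"]

    best = max(policies, key=lambda name: naive_objective(policies[name]))
    print(best)   # "confine humans indoors": the stated incentive is met, the intent is not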

Fortunately, humans are better at achieving alignment with each other because we have an intrinsic desire for alignment with other humans.  We call it friendship, love or even soul mates.  Everyone realizes that money cannot buy love, so it should not be surprising that it cannot buy true alignment in an organization either. 

Solutions:  Changing the organizational structure or incentives to get better decisions.

The point of the algorithm is to change the organization so that decision-makers will perform better in the future.  There are three common remedies that correspond to the three questions:

  1. Can a different person be given the responsibility for making the decision (because someone else has better information and/or incentives)?
  2. Can the decisionmaker be taught a better algorithm for making decisions that will avoid their fallacious thinking?
  3. Can the incentives for the current decision-maker be improved?  Don’t just consider pecuniary incentives, and beware of unintended consequences.  Every change in incentives has tradeoffs because it must shift resources away from something else the organization had been prioritizing.  Examine the organizational culture to see which incentives are valorized and which are de-emphasized.    

In my experience, most solutions involve incentives (question #3), followed by organizational structure (changing the answer to question #1).  Information problems (question #2) rarely yield solutions because it is hard to avoid fallacies, as explained above.  So the bad news about information problems is that they are rarely solvable; the good news is that they are very rare compared with incentive problems. 

Froeb focuses on changing pecuniary incentives, but an organization’s culture produces powerful intrinsic motivation.  Changing the organizational culture can be difficult, but it is a better way to solve a decision-making problem if it can be accomplished.  Organizational culture is determined by the norms and ethics of a group.  If an organization is truly motivated by its mission, that eliminates the incentive problems that cause most of the trouble in Froeb’s anecdotes.

For example, in several of Froeb’s anecdotes, organizations have problems because managers and/or other employees are lying and cheating to game the compensation system and reap selfish rewards.  Froeb sees this kind of behavior as a part of human nature because he is a believer in the rat-actor paradigm, so he blames bad incentives rather than bad values.  But when people are willing to lie and cheat, no pecuniary incentives work well, and it is probably best to send a clear signal that that kind of behavior is not acceptable by firing the offender.  Getting rid of egregious behavior helps restore a culture of honesty and teamwork for the remaining employees.  A cheating coworker is toxic, and most organizations immediately fire people who mislead others in order to cheat the organization. 

Forgiveness can also work if there is repentance and restoration, but that is beyond the scope of this class.  Changing organizational culture is a key competency of leadership, and this class hardly touches on it, but many other classes focus on these issues.  I encourage you to bring your knowledge of leadership to bear when thinking about how to shape intrinsic incentives and organizational culture to solve decision-making problems, rather than relying on pecuniary incentives alone.