In a previous article I used mathematical concepts (but with no math, I promise!) to explain how unconscious bias is a super-power in our brain’s neural architecture; how it can get us into trouble when we have wrong or sparse information; and how this can lead us to discriminate in suboptimal ways when inferring things about individuals from demographics for which we have only sparse or stereotyped information. And while I suggested some actions we can take to reduce the sparsity of our information (improving the optimality of our inferences), the general take-away of the article was that our brains were just doing their thing – nothing to feel ashamed about, and equally applicable to everyone.
That’s part of the story. But it’s certainly not the whole story.
So, in this second article, I want to expand a bit on the concept of acting vs. perceiving. I’m going to have to bring another math idea to the table to join my beloved Kalman Filter, so please join me in welcoming Linear Quadratic Gaussians1, which we’ll affectionately call LQG for short.
LQG tells us the best action we should take, given 1) the information that we have (both about causes and effects, and about the state of things now and in the past), and 2) the things that we care about. In the last article we homed in on the state of things now and in the past, and for the sake of this article, it’s not too large a leap to extend this notion to causes and effects. Like Kalman Filters, LQG doesn’t just consider what things are; it considers how confident we are about them (e.g., if I’m hungry I should catch a fish, but I’m not that confident that fishing will result in me catching a fish, whereas if I choose instead to go to the grocery store, I’m pretty confident I can buy a carrot). The part I really want to probe in this article is the second part of LQG: articulating what we care about.
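The fish-or-carrot intuition can be sketched in a few lines of code. This is only an illustration with made-up numbers (the rewards and probabilities are hypothetical, not from any real model): an action’s worth is its reward weighted by how confident we are that the action will actually deliver it.

```python
# A toy sketch (hypothetical numbers) of choosing an action by weighting
# its reward by our confidence that the action will actually deliver it.
def expected_reward(reward, p_success):
    """Expected reward = the reward, times how confident we are it happens."""
    return reward * p_success

# Fishing: a big meal, but low confidence we'll catch anything.
fishing = expected_reward(reward=10, p_success=0.2)
# Grocery store: a humble carrot, but we're almost certain to get it.
grocery = expected_reward(reward=4, p_success=0.9)

best = "go fishing" if fishing > grocery else "go to the store"
print(fishing, grocery, best)  # 2.0 3.6 go to the store
```

The smaller but near-certain reward wins, which is exactly the trade-off the fishing example describes.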
Humans typically care about getting the maximum reward with the least amount of effort. For those of you who say, “that sounds lazy, I’m not lazy, I work out every day,” I’d just articulate your reward in terms of things like pride, strength, or endorphins. For those of you who say, “that sounds selfish, I care about others,” I’d just articulate your reward in terms of the joy you bring to others. And if you really think about it, it makes sense. I’m not saying you should expend the least amount of effort you can; I’m just saying you should maximize the reward/effort ratio, because all of us only have so much effort we can produce. If time is the currency of our effort, for example, most of us only have 24 hours in a day – what is the best way we can spend that time to maximize the things we care about?
This concept of reward doesn’t need to be a singular idea either, although some recent theories offer hope of providing a unifying framework2. For example, some of us might say we care about getting enough sleep, and exercising, and eating well, and spending time with loved ones, and helping others, and making an impact in the world. We can even say we’re willing to slack off a little on one of those ideas if it results in substantially more of the other (e.g., get by with a bit less sleep today if it means I can help someone a lot), with notions that the amount we deviate has increasing implications (e.g., I’m willing to get by on 7 hours of sleep for a night to make an impact in a student’s life, but the cost is much higher for me to consider getting by on 20 minutes of sleep for 2 weeks straight, no matter how many students I’m able to help).
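The idea that deviations have increasing implications is precisely the “quadratic” in Linear Quadratic Gaussian: the cost of missing a target grows much faster than the miss itself. A minimal sketch, with an illustrative sleep target and weight that are my own made-up numbers, not a claim about anyone’s real cost function:

```python
# A minimal sketch of the "quadratic" in LQG (illustrative numbers only):
# deviating twice as far from what we care about costs four times as much.
def cost_of_deviation(hours_of_sleep, ideal=8.0, weight=1.0):
    """Quadratic cost: the penalty grows with the square of the deviation."""
    return weight * (hours_of_sleep - ideal) ** 2

print(cost_of_deviation(7))  # 1.0  -> one hour short: a mild cost
print(cost_of_deviation(4))  # 16.0 -> four hours short: sixteen times the cost
```

This is why getting by on 7 hours for one night feels cheap while 20 minutes a night for two weeks feels unthinkable: the cost of the deviation, not the deviation itself, is what we trade against the good we can do.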
Even for simple questions, most people care about more things than they realize, and these values significantly influence their actions. Let me give you a simple example to illustrate.
Let’s keep things simple and imagine that you are straightforward person who only values money, and that you are about to act in a way that gains or loses you money. While some of you may say there’s more to life than money, indulge me for a minute, because I want to show how quirky values pop up even in this simple example. Okay – having put yourself in this mindset, I’m about to give you two options, and of course, you’ll choose the one that results in you getting more money or losing less money, right? Read on …
Imagine that I offer you a choice of two boxes: the red box, in which we flip a fair coin and you receive $1,000,000 on heads or $0 on tails; or the blue box, in which you receive a guaranteed $490,000.
Which box would you choose? If the only thing we care about is money, we should all choose the red box. The average value of the red box is $500,000, which is more than $490,000, so we should choose it3. But most of my students, and I myself, would choose the blue box without a backwards glance. Why? Because even if we’re trying really hard to only care about the money, most of us are averse to risk. It’s not like someone’s going to offer us this chance ten times in our life – this is our only shot, so why not take the sure bet even if it’s worth $10,000 less? To think of it a different way, if we end up with $490,000 and are told the result of the coin toss would have given us $1,000,000, we’ll only experience mild regret, whereas if we end up with $0 and realize we could have received $490,000, we’ll experience a ton of regret. Whether we add risk aversion or regret aversion as a value to our optimization, it’s clear that there are implicit terms in our cost function that affect our actions.
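One standard way to make the blue-box preference explicit is a concave utility of money. The sketch below uses a square-root utility purely as an illustration (it is a textbook stand-in, not a claim about anyone’s actual values): in raw dollars the red box wins, but in utility terms the sure thing wins, matching what most of us actually choose.

```python
import math

# A hedged sketch of risk aversion: with a concave utility of money
# (sqrt here, a standard illustration), the sure $490,000 beats the coin
# toss even though the toss has the higher expected dollar value.
def expected_utility(outcomes, utility=math.sqrt):
    """Sum of probability * utility(reward) over all possible outcomes."""
    return sum(p * utility(reward) for p, reward in outcomes)

red_box = [(0.5, 1_000_000), (0.5, 0)]   # coin toss: $1,000,000 or nothing
blue_box = [(1.0, 490_000)]              # a sure $490,000

# In raw dollars the red box "wins" ($500,000 vs $490,000)...
assert expected_utility(red_box, utility=lambda x: x) > \
       expected_utility(blue_box, utility=lambda x: x)
# ...but in sqrt-utility terms the sure thing wins.
print(expected_utility(red_box) < expected_utility(blue_box))  # True
```

Making the implicit value (risk aversion) an explicit term in the calculation is exactly the move this article advocates.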
The take-away here is that we need to acknowledge that our actions are informed by a variety of cost-functions, some of which we are explicitly aware of, and some of which we don’t even think about but which still influence our decisions to maximize the things we care about.
In my last article, I argued that humans are suboptimal in their statistical calculations—they think they are good at it (they’re confident in their gut), but their confidence is misplaced, even if they’re brilliant statisticians! In this article, I’m taking a similar tack—that humans care about things they don’t explicitly realize they care about, and that those values implicitly influence their actions.
Is it a terrible thing that we aren’t aware of the implicit values that inform our actions? In the case of perception we saw that it sometimes caused us to make faulty inferences, that there were concrete steps we could take to do things better, and that these concrete steps had important implications for how we engaged with the world. In the example above, making the implicit value of risk aversion explicit didn’t change your mind or your action – you still probably chose the $490,000 (at least I hope you did!). But I will attempt to show some examples below in which we think we have all the values on the table and are making optimal, rational decisions, and in which we get upset when someone adds an artificial value to the equation. I’d like to question whether they are adding an artificial value or just making an implicit value explicit. Before doing that, it will be helpful to introduce one more concept that our brains have trouble wrestling with when we take actions, and that is the concept of time and its twin sibling, momentum.
Time. Humans have trouble understanding the influence of time on their actions, but the remarkable thing is that the way humans value time affects just about all of the actions we take in the same way. Simply put, we value things more if we get them now and less if we must wait. Everyone who has observed a child or a puppy has seen this – the treat is much more valuable if I can have it now than if I must wait 20 minutes (or one year) to receive it. Humans discount the value of things if the reward doesn’t arrive until the future, and this allows us to make decisions (should I choose the green box, in which I get $350 now, or the orange box, in which I get $1000 in 5 years4?).
The fascinating thing is that humans discount time the same whether we are talking about milliseconds, seconds, days, or even years, and that we discount time the same whether we are talking about moving our eyeballs, our arms, our car, or our money. If you’ve ever wondered why your reach for a glass of water takes about 1 second, vs. 0.1 seconds or 1 minute, or why you choose to invest or not invest in your child’s university education when they’re a toddler, the answer is because you discount the value of things in the future. The key takeaway here is that if we are comparing options or events that happen at different periods of time, we need to explicitly incorporate the value or discounting of time in the conversation.
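The green-box/orange-box decision can be sketched with a simple exponential discount. The annual discount rate below is made up for illustration (the article doesn’t commit to a number), but it shows the mechanics: each year of waiting shrinks the reward, so a larger-but-later amount can end up worth less than a smaller one today.

```python
# A small sketch of temporal discounting (the discount rate is a
# hypothetical choice for illustration): value now vs. a larger reward later.
def discounted_value(reward, years, annual_discount=0.25):
    """Each year of waiting multiplies the reward by (1 - discount rate)."""
    return reward * (1 - annual_discount) ** years

now = 350                                # the green box: $350 today
later = discounted_value(1000, years=5)  # the orange box: $1000 in 5 years
print(round(later, 2))  # 237.3 -> at this discount rate, take the $350 now
```

With a gentler discount rate the orange box would win instead, which is the point: comparing options that pay off at different times requires putting the discounting of time explicitly into the equation.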
Momentum. Almost every time I use a washing machine, I marvel at how much time I’ve saved vs. doing it myself, and how much money I’ve saved vs. hiring someone else to wash my laundry by hand. Machines have changed our lives, reducing the amount of time things take and the cost of getting them done. Even if all the washing machines break 10 years from now and I have to wash my own clothes from then on, think of all the books I’ll have had time to read and all the knowledge I’ll have gained that I otherwise wouldn’t have – I am definitely benefiting from having a washing machine.
Momentum is the twin sibling of time – the same concept, but a different emphasis. For some, it may be easier to think of things by discounting values according to time. For others, it may be easier to acknowledge that when balancing two sides (that’s what an equation is, after all – equating two groups of things), time needs to be added to the balance on one side if there is a difference in time.
LQG is relevant in so many ways, it’s really hard to know where to start! Any conversation in which the actions are important benefits from ensuring that all the costs are explicitly known and carefully considered. Of course, there are many values we don’t even know that we don’t know, but as we learn more, we can include those values that we do know about, or even debate whether they are appropriate or pertinent to the discussion at hand. One of the key goals of this article is to give us a framework from which we can debate the appropriateness of various values. So, with that mindset, let’s dive into some particularly controversial topics, to see if this framework informs the conversation. My goal here is not to convince anyone (I’ll intentionally choose polarizing topics), and if you say “Ah, you forgot an important value to include in the equation!” my job in writing this article will have been accomplished.
Many of my engineering colleagues get frustrated by the concept of targeted hires to compensate for a perceived lack of diversity along some facet (race, gender, etc.). They are rationalists and purists who believe that we should only assess candidates on the values (attributes) that are relevant to that position—and I agree.
But I also have some questions. In no particular order, they include:
It is clear where I land on this controversial topic, and many who read this (possibly the majority) will land on the other side. And that’s okay – the key thing is that within this scaffolding we can add on even more values. My presentation of values has been one-sided, but the beauty of this approach is that we can add more values, scratch some off the list, and keep going, until we have an explicit understanding of the dynamics at play in making decisions.
Is the concept of privilege relevant? Many on the left (both those who have and those who do not have privilege) would argue that it is; many on the right (both those who have and those who do not have privilege) would argue that it is not relevant.
From a value-function perspective, and considering time and momentum, it is hard to envision a scenario in which privilege doesn’t influence how we got to where we are, or the actions that will occur in the future if it is not accounted for6. I won’t reiterate most of the points made in the targeted hire section, because many of them are the same, but unless we include privilege as an explicit term in our value function, we need to be aware that a) it has a temporal component; b) it influences our actions; and c) it influences the value-functions used to assess the actions of others.
If I have been successful, I have convinced you that we have implicit values that make sense but of which we are unaware, and that these values influence our actions in ways we often don’t perceive. I hope I have also made a case that it’s possible to explicitly describe these values, and that doing so enables people who disagree to do so within a framework that helps them to value each other’s perspectives. Indeed, in a future version of this article I hope to include counterpoints from colleagues who disagree with me, just to illustrate this point more convincingly. And finally, I hope I have made the case that it is okay to include explicit value-functions when making decisions, and that doing so does not artificially ruin the decision-making process. Indeed, I see the explicit articulation of value-functions, in consultation with the appropriate communities, as one of the most important steps to creating a better society.
1. The mathematicians in the room may quibble that in the last article I really spoke about Bayesian inference, of which Kalman filters are a subset, and that in this article I’m talking about optimal control, of which Linear Quadratic Gaussians are a subset, but who can resist using the cute nickname of LQG? In all seriousness, we respond better to personas than we do to abstractions, so I’d much rather have us think about LQG than the abstraction of optimal control. Return to text.
2. For those who are brilliant and bored, I recommend the Free Energy principle, which is best articulated in a freely available book titled Active Inference. Return to text.
3. Technically it is the expected value. It is the way we make decisions when confronted by multiple possibilities: multiply the probability of each event by the reward we’d get if that event came true, then sum over all the possibilities. Return to text.
4. To keep the question simple, imagine there is no inflation, or if it’s easier, that the amounts will be compensated for inflation. Return to text.
5. The book Invisible Women provides many examples of the values that are not accounted for when women are not part of the decision-making process, and the detrimental impact to everyone that this has. Return to text.
6. To be clear, my goal here is not to convince everyone that they should account for privilege; it’s to provide a scaffolding from which the topic can be debated. While I have trouble envisioning a scenario in which privilege isn't factored in some way into the equation, I welcome dialogue within this framework or others to articulate such scenarios. Return to text.