Artificial Intelligence, Leader Decisions, And Coulda Woulda Shoulda Thinking

Though it is only June, it is safe to say that for many leaders, artificial intelligence is the topic of 2023. The emergence of easy-to-use tools like ChatGPT presents industry-disrupting challenges across a broad spectrum of the economy, from higher education to the practice of law to health care, information technology, and beyond.

Claiming that decision-making is a critical managerial skill should not be controversial. It is evident that poor decisions can be costly. As AI decision support tools propagate and improve, how and when to deploy them to make workplace decisions is the question leaders are now trying to answer. The ability to collect and sort through troves of data to identify patterns, or to quickly assess the relative merit of thousands of solutions to a complex problem, should lead to better decisions. But how quickly leaders will learn to trust AI-generated decisions remains to be seen.

All employees face decisions in the conduct of their jobs. To protect organizations from poor decisions, bureaucracy takes discretion out of the more routine ones by enacting rules and procedures. By comparison, leader decisions are resistant to such programming because there are too many variables to model, too little precision in how those variables are measured, or an incomplete understanding of how the variables are interrelated. The resulting vagueness and complexity have been enough to stump an algorithm. Or at least to stump the algorithms we have had in the past.

These decision attributes – vagueness and complexity – are difficult for algorithms and tricky for decision-makers because they open the door for second-guessing any decision. We have all experienced this post-decision regret. One question raised by the emergence of easily accessible AI tools is whether they will eliminate the self-doubt felt by leaders after they make a big decision.

Some years ago, after a painful loss, New Orleans Saints football coach Jim Mora offered a memorable expression to capture the frustration of post-decision regret. He decried “coulda woulda shoulda” thinking in his post-game press conference. Coach Mora vented that it was no use to make excuses or to question what had happened or could have been different. Instead, all that mattered was what the team did next. Thankfully for Mora, his team, and its fans, what came next was for the Saints to win nine consecutive games and make the playoffs for the first time in franchise history.

Mora made a compelling case – and then backed it up with results – that coulda woulda shoulda thinking is not what makes the team better. He observed that no one on a good team ever utters that phrase. Ever. First, they don’t need to say it because they win. Second, players on good teams recognize that the temptation to say it only arises if your team is not good enough. From Mora’s perspective, there is no sense in casting about for the salve provided by coulda woulda shoulda. Players and coaches should focus only on what can make the team better.

People have written about ways to cope with decision regret or dissonance. Certainly, finding a way to be less gloomy is therapeutic. However, the more productive work is to get better. Winning – or, for a manager, making a great decision – is the best way to avoid these discomforting and unproductive feelings.

It is asking a lot of someone to avoid slipping into coulda woulda shoulda thinking. Regret is hard-wired into our being – we naturally revisit decisions made under conditions of uncertainty and wonder if we could have done better. That tendency is a good thing when it teaches us how to get better outcomes next time. It is a bad thing when managing the dissonance feeds self-doubt and shifts mental energy from the future to the past. Will AI support workplace decisions that are not regrettable? What will it take for leaders to trust that AI hasn’t left them vulnerable to self-doubt about a consequential decision?

The tools AI continues to provide and improve will no doubt reduce the number of decisions where the human element leaves room for revisiting and regret. However, coulda woulda shoulda will be with us for some time. AI is far from integrating the many moral and ethical intangibles people try to incorporate into their decisions. Until it does – and until it is trusted when doing so – leaders will have to watch out for the corrosive consequences of coulda woulda shoulda thinking: the distraction of regret, which fixes a leader’s attention on reducing dissonance by looking in the rear-view mirror rather than out the windshield at the road ahead, where the work of getting better lies. Curiously, some have argued that AI decision-making needs some regret programmed into it to help it learn in a more human-like way. Perhaps it won’t be long before AI rolls out a coulda woulda shoulda application so leaders needn’t do their own work to feel dissonance.