Bias in the Algorithm: Thoughts for Leaders on Their Use of AI

Artificial intelligence feels revolutionary: an efficient, intelligent assistant that can help leaders analyze data, draft messaging, and train teams. But as useful as AI is, we must remind ourselves that it is not neutral.

AI aggregates knowledge from existing patterns, and those source materials carry histories, blind spots, and distortions that can easily become baked into the outputs. AI reflects the sampled world, which is never the whole picture.

Why Bias in AI Matters for Leadership

1. AI Is Only as Good as Its Training Data: If the data used to train an AI model doesn’t reflect the diversity and complexity of the real world, the outputs will skew toward the biases present in the sample. Even very large datasets are not neutral mirrors of the world. They’re scraped and collected through specific channels (mostly internet sources), which means entire groups and experiences may be absent or underrepresented.
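The sampling problem above can be made concrete with a toy sketch. The group names and collection weights below are purely illustrative assumptions, not real data: a population where four groups are equally represented, but a biased collection channel over-samples one of them.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical population: four groups, equally represented (25% each).
population = ["group_a", "group_b", "group_c", "group_d"] * 1000

# A biased collection channel: group_a content is far more likely to be
# "scraped" than the others (weights are illustrative assumptions).
weights = {"group_a": 0.70, "group_b": 0.15, "group_c": 0.10, "group_d": 0.05}
sample = random.choices(list(weights), weights=list(weights.values()), k=1000)

counts = Counter(sample)
for group in weights:
    share = counts[group] / len(sample)
    print(f"{group}: {share:.0%} of the training sample")
# Each group is 25% of the population, but the collected sample tells a
# very different story -- and any model trained on it inherits that skew.
```

A model trained on this sample would treat group_a's patterns as the norm, not because the world looks that way, but because the collection channel did.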

2. AI Feels Confident—Yet Isn’t Always Correct: Large language models like GPT-3.5 and GPT-4 deliver polished, confident responses—and they also exhibit human-like cognitive biases. In a structured study testing 18 common cognitive biases, AI systems reflected persistent errors such as confirmation bias and the hot-hand fallacy, especially in ambiguous scenarios. AI was more accurate on formulaic tasks but still prone to irrational reasoning when subjectivity came into play. (Live Science) Confirmation bias is the tendency to seek out, interpret, and remember information in a way that supports one’s existing beliefs, while ignoring or discounting information that contradicts them. The hot-hand fallacy, by contrast, is the belief that streaks or patterns exist in random events.

3. Bias Is Sometimes Invisible—and Multiplies Over Time: Bias can spread subtly between AI models, even when content appears neutral. Researchers at Anthropic, UC Berkeley, and others uncovered “subliminal learning”—a mechanism where hidden patterns passed between AI “teacher” and “student” models transfer undesirable traits silently. Filtering explicit content alone may not prevent these biases from propagating. (Tom’s Guide)

4. Real-World Consequences Are Significant: Bias in AI systems has real impact. For example, a University of Washington study found that resume-screening systems favored resumes with names perceived as “white” in 85% of comparisons, and “male” names in 52% of cases, regardless of qualifications—reflecting discriminatory patterns at scale. (peoplemattersglobal.com). In educational settings, AI tools creating behavior intervention plans have recommended harsher responses for students with Black-coded names, compared to more supportive plans for white-coded names. (Chalkbeat)

5. Human Decision-Making Can Be Influenced by AI Bias: AI isn’t just biased—it can amplify human bias through feedback loops. Research from University College London showed that when participants made biased judgments and then saw an AI’s response, the AI’s feedback heightened their bias—even when the AI wasn’t explicitly malicious. Over time, people adopted more biased responses, even after the AI inputs were removed. (Reddit)
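The feedback loop described above can be sketched as a toy simulation. The amplification factor and update weight below are illustrative assumptions, not measured values: a slightly biased human belief trains an AI, the AI exaggerates the skew, and the human shifts partway toward the AI’s output.

```python
# Minimal sketch of a human-AI bias feedback loop (illustrative numbers only).
# A "belief" is the probability of favoring one option; 0.5 means unbiased.

def ai_output(training_belief: float, amplification: float = 1.2) -> float:
    """Toy model: the AI exaggerates whatever skew it was trained on."""
    skew = training_belief - 0.5
    return min(max(0.5 + amplification * skew, 0.0), 1.0)

def human_update(belief: float, ai_belief: float, weight: float = 0.3) -> float:
    """Toy model: the human shifts partway toward the AI's output."""
    return (1 - weight) * belief + weight * ai_belief

belief = 0.55  # a small initial human bias
for step in range(10):
    belief = human_update(belief, ai_output(belief))
    print(f"step {step + 1}: belief = {belief:.3f}")
# The small initial skew compounds each round instead of washing out.
```

Under these assumed parameters the bias grows a few percent per round, which is the qualitative point: neither party is "explicitly malicious," yet the loop drifts away from neutral.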

Leadership in the Age of AI: Treat It as a Tool, Not Truth

Savvy leaders recognize AI’s strengths—and guard against its weaknesses. Here are four critical practices:

  1. Treat AI as a First Draft, Not a Final Answer: Use AI to brainstorm, outline, or generate ideas—and always review critically. Apply your own expertise, context, and judgment before relying on its output.
  2. Ask, “Whose Voice is Missing?”: Pause and reflect: What perspectives may not be represented in the AI’s data? Who could be disadvantaged by its suggestions?
  3. Demand Transparency and Maintain Oversight: Remember that AI outputs are not impartial verdicts. Favor research-backed, explainable tools—and always keep humans in the loop, especially in high-stakes decisions. (University of Oklahoma, IMD Business School)
  4. Encourage Team Dialogue—Not AI Dictation: Share AI-generated insights as starting points, not solutions. Keep discussions collaborative, so that teams interrogate—and improve upon—what AI offers.

Leading in an AI-Powered World

AI will remain a growing force across training, strategy, and operations. As with any new tool, the best leaders are those who balance openness with skepticism. By recognizing AI as a powerful assistant—and not an infallible truth teller—leaders preserve critical thinking and ethical grounding in their organizations.

In the spirit of improvisation: stay curious, question assumptions, and use every tool with intention and integrity. AI is a remarkable partner, but it must not be a substitute for thoughtful, responsible leadership.