In the growing discourse on artificial intelligence and its potential to transform workplaces, new research from Harvard Business School offers a compelling reminder of why humans still hold a critical edge over machines—particularly when it comes to adapting to change. According to Julian De Freitas, assistant professor at Harvard and director of the Ethical Intelligence Lab, while AI is excellent at specialized tasks, it lacks a core human trait: the ability to “self-orient” across rapidly shifting environments.
The study, published in Nature Human Behaviour under the title “Self-Orienting in Human and Machine Learning,” delves into how humans instinctively adjust their perception of self in new settings—something machines are far from mastering. Whether it’s waking up in an unfamiliar hotel room or jumping between mobile apps, humans perform split-second mental pivots to recalibrate their understanding of where they are and what actions are required. This innate flexibility is still out of reach for even the most advanced AI systems.
De Freitas and his co-authors, including researchers from Bilkent University, Yale University, MIT, and Harvard's Psychology Department, designed a set of experiments using four increasingly complex video games. Both humans and reinforcement-learning algorithms were tasked with identifying their in-game avatars (represented by red squares) and navigating them to a goal. Human players had to work out which avatar they controlled before steering it to the goal with the arrow keys; the AI agents had to make the same decisions from visual inputs alone.
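The identify-then-navigate structure of the task can be sketched in a few lines of Python. The toy gridworld below is our own illustration, not the study's actual code; the class and function names, the fixed grid layout, and the probe-move strategy are all assumptions made for the example. Several identical squares appear on screen, only one responds to key presses, and the solver must first self-orient by probing before it can head for the goal.

```python
class SelfOrientGrid:
    """Toy self-orienting task (illustrative only): several identical
    'red squares' are on screen, but key presses move only one of them.
    The player must figure out which square is 'me' before navigating."""

    def __init__(self, size=5, avatar=(1, 1), decoys=((3, 0), (0, 3)), goal=(4, 4)):
        self.size = size
        self.avatar = avatar        # the one square the keys control
        self.decoys = list(decoys)  # visually identical squares that never move
        self.goal = goal
        self.steps = 0              # total key presses used

    def squares(self):
        """Observation: positions of all squares, with nothing
        marking which one the player controls."""
        return sorted([self.avatar] + self.decoys)

    def move(self, dx, dy):
        """One key press: shift the controlled square, clamped to the grid.
        Returns True once the controlled square reaches the goal."""
        x, y = self.avatar
        self.avatar = (min(max(x + dx, 0), self.size - 1),
                       min(max(y + dy, 0), self.size - 1))
        self.steps += 1
        return self.avatar == self.goal


def self_orient_and_solve(env):
    """Human-like strategy: make one probe move, watch which square moved
    (that square is 'me'), then walk greedily toward the goal."""
    before = set(env.squares())
    env.move(1, 0)                        # probe press
    me = (set(env.squares()) - before).pop()  # the square that responded
    gx, gy = env.goal
    done = me == env.goal
    while not done:
        x, y = me
        dx = (gx > x) - (gx < x)          # step along x first, then y
        dy = 0 if dx else (gy > y) - (gy < y)
        done = env.move(dx, dy)
        me = (x + dx, y + dy)
    return env.steps


env = SelfOrientGrid()
n = self_orient_and_solve(env)  # one probe plus the shortest path: 6 presses here
```

The probe step is the crux: a fixed policy that maps pixels straight to actions has no explicit "which square am I?" step, which is one way to picture why, in the study, agents struggled once the environment stopped matching their training experience.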
The result? Humans won every time. Despite the rising complexity of the tasks, human players consistently demonstrated the ability to self-orient and adapt, while the AI agents floundered when the environment demanded flexibility. “People were solving everything faster,” said De Freitas. “Self-orientation doesn’t seem to exist at all for AI.”
The implications of this research are far-reaching, particularly for sectors that are increasingly leaning on AI to handle dynamic challenges. De Freitas points to real-world examples like autonomous vehicles or medical care, where a rigid response from AI could result in failures or even harm. In contrast, humans can instantly reframe the problem and act accordingly—for instance, a doctor shifting focus from a healthy patient to an elderly one who needs additional support navigating the clinic.
De Freitas warns that current AI systems attempt to achieve this kind of adaptability by training on vast datasets, hoping to cover every possible scenario. But this brute-force method doesn’t compare to human cognition. “Humans adapt; they continuously understand where they are in the world and what problem they are solving in response to changing circumstances far better than current AI does,” he notes.
For companies aiming to integrate AI into everyday operations, the research serves as a timely caution. De Freitas advises managers to recognize the limitations of AI in fast-changing environments and to approach implementation with a nuanced understanding of where machines may fail. “If you more deeply understand why your AI systems are limited, you are probably better equipped to know when and how to deploy them in practice.”
He emphasizes that identifying and acknowledging this gap in adaptability is essential for improving AI systems or supplementing them with human oversight. “All managers want these systems to be adaptive, intuitive, and have broad applications,” he adds. “Our work identifies a key reason why that’s still hard.”
As businesses continue to explore automation and AI integration, the research underscores a fundamental truth: machines may be powerful, but for now, the human mind remains unparalleled in its agility.