AI and Human Behaviour: How Artificial Intelligence Shapes Attention, Emotions, and Identity
- Dr. Sunil E. Jadhav
- Jan 7
- 4 min read
Updated: Jan 28
Introduction: A Silent Companion in Our Daily Lives
Picture this: your alarm clock wakes you at just the right moment in your sleep cycle. You check your phone, where your social media feed is already filled with stories and updates you’ll likely enjoy. Later in the day, you ask your digital assistant for directions, shop for groceries online where “recommended items” appear, and finally unwind by watching a series suggested to you based on your mood and habits.

By the end of an ordinary day, AI and human behaviour are already deeply intertwined. Artificial Intelligence has quietly shaped what you saw, how you felt, and the choices you made—often without you being fully aware of it. This constant, invisible influence raises a critical question: Are humans shaping AI, or is AI shaping human behaviour? Psychology, as the scientific study of mind and behaviour, offers powerful tools to explore this evolving relationship.
AI Through a Psychological Lens
Artificial Intelligence is no longer limited to futuristic robots or science fiction narratives. It is embedded in everyday life—streaming platforms, navigation apps, voice assistants, smart devices, chatbots, recommendation engines, and even mental health tools.
What distinguishes AI from earlier technologies is its ability to learn, adapt, and interact. Unlike television or radio, which broadcast the same content to every audience member, AI systems actively listen, respond, and predict. They personalize experiences for each individual, subtly guiding decisions and behaviours.
For psychologists, this makes AI and human behaviour a compelling area of study. AI influences attention, emotion regulation, decision-making, identity formation, and even our sense of autonomy—core psychological domains.
AI and Attention: The Battle for Focus
Attention is one of the most valuable human resources, yet AI-driven systems are designed to capture and retain it for as long as possible. Social media platforms such as Instagram, TikTok, and YouTube do not simply display content—they learn what keeps users engaged and continuously serve more of it.
This process closely mirrors principles of operant conditioning. Just as a pigeon pecks a key (or a rat presses a lever) in anticipation of a reward, humans scroll endlessly in anticipation of the next emotionally stimulating post. This unpredictable, variable-ratio schedule of rewards—sometimes amusing, sometimes emotionally validating—creates compulsive engagement.
Psychological consequences include:
Shortened attention spans
Difficulty focusing on offline tasks
Restlessness or anxiety when disconnected
In essence, AI has gamified attention, making disengagement increasingly difficult.
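The variable-reward loop described above can be sketched as a toy simulation. Everything here—the function name, the 30% reward rate, the fixed seed—is an illustrative assumption for demonstration, not a measurement from any real platform.

```python
import random

def scroll_session(max_swipes, reward_rate=0.3, seed=7):
    """Toy model of a variable-ratio reward schedule: each swipe may or
    may not surface an emotionally rewarding post. The unpredictability
    of when the next 'hit' arrives is what sustains the scrolling."""
    rng = random.Random(seed)  # fixed seed keeps the demo reproducible
    rewards = [rng.random() < reward_rate for _ in range(max_swipes)]
    return sum(rewards)  # number of rewarding posts in the session

hits = scroll_session(100)
```

Because rewards arrive at unpredictable intervals rather than on a fixed schedule, the user cannot tell when stopping is "safe"—the same property that makes variable-ratio schedules the most persistent form of reinforcement in conditioning experiments.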
AI and Emotions: Machines That “Understand” Feelings
A major development in AI is affective computing—machines that detect, interpret, and respond to human emotions. Smartphone cameras can recognize facial expressions, while chatbots may detect emotional tone in language and adjust responses accordingly.
On the positive side, AI-based mental health tools can reduce loneliness, offer emotional validation, and deliver evidence-based strategies from Cognitive Behavioral Therapy. Elderly individuals and people experiencing social isolation may find comfort in AI companions.
However, emotional needs are complex and deeply relational. Overreliance on machines for emotional support risks weakening genuine human empathy. AI may simulate understanding, but it lacks mutual emotional investment—an essential element of real connection.
Moreover, AI-driven emotional targeting is widely used in advertising. Online platforms tailor content and products based on emotional states, blurring the line between assistance and psychological manipulation.
AI and Decision-Making: Who Holds the Reins?
Human decision-making often relies on cognitive shortcuts, or heuristics. AI systems leverage these tendencies by offering constant algorithmic “nudges.”
Examples include:
Online shopping suggestions such as “People also bought…”
Political content filtered to reinforce existing beliefs
Dating apps ranking potential partners
While these features increase convenience, they raise concerns about autonomy. In psychology, the locus of control refers to whether individuals perceive outcomes as driven by their own actions or by external forces. As AI increasingly guides choices, individuals may shift toward an external locus of control—feeling less personally responsible for decisions.
This shift has profound implications for motivation, responsibility, and psychological agency.
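A “People also bought…” nudge, for instance, can be approximated by simple co-occurrence counting across purchase baskets. This is a minimal sketch under that assumption—the function name and sample data are hypothetical, not any retailer’s actual system.

```python
from collections import Counter

def also_bought(baskets, item, top_n=2):
    """Count how often other items appear in baskets containing `item`,
    then suggest the most frequent co-purchases as a nudge."""
    co = Counter()
    for basket in baskets:
        if item in basket:
            for other in basket:
                if other != item:
                    co[other] += 1
    return [name for name, _ in co.most_common(top_n)]

baskets = [["bread", "butter", "jam"],
           ["bread", "butter"],
           ["bread", "tea"]]
print(also_bought(baskets, "bread"))  # butter co-occurs most often
```

The nudge is effective precisely because it exploits a heuristic: what others bought feels like evidence of what one should buy, even though the suggestion is just a frequency count.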
AI and Identity: Constructing the Digital Self
Identity is shaped through social interaction, and today, much of that interaction is mediated by AI. Social media platforms function not only as spaces for expression but also as systems of feedback—likes, shares, comments, and followers.
For adolescents, this is particularly impactful. During a critical stage of identity formation, algorithm-driven validation communicates what is desirable, popular, or acceptable.
This can lead to:
Inflated self-worth based on engagement metrics
Social comparison, anxiety, and low self-esteem
Fragmented or curated digital identities
AI does not merely reflect identity—it actively participates in shaping it.
Ethics, Trust, and Autonomy in AI and Human Behaviour
The psychological implications of AI raise important ethical questions:
1. Bias
AI systems learn from historical data. If that data reflects social bias, AI can reinforce inequality. Biased hiring algorithms and flawed facial recognition systems illustrate this risk.
2. Trust
Humans often either overtrust or undertrust AI. Overtrust can lead to blind dependence, while undertrust may cause rejection of accurate medical or psychological guidance.
3. Dependency
Excessive reliance on AI may weaken critical thinking and problem-solving skills. When thinking is consistently outsourced, cognitive development—especially in children—may be compromised.
Case Studies: AI in Everyday Life
Netflix and the Binge-Watching Phenomenon
Netflix’s recommendation algorithms predict user engagement and encourage prolonged viewing. This has contributed to binge-watching behaviours associated with sleep disruption, reduced productivity, and patterns resembling behavioural addiction.
AI Therapy: Helpful or Hollow?
Chatbots such as Woebot provide accessible mental health strategies. While users often report short-term relief, therapy is not merely technical—it is relational. Whether AI can replicate human empathy remains an open question.
Social Media Echo Chambers
AI-driven feeds reinforce existing beliefs, creating echo chambers. Psychologically, this contributes to group polarization, increased hostility, and reduced openness to alternative viewpoints.
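The echo-chamber feedback loop can be sketched as an engagement-maximizing ranker: posts matching topics the user already engaged with are scored higher and surface first. All names and data here are illustrative assumptions, not any platform’s real ranking system.

```python
from collections import Counter

def rank_feed(posts, engagement_history):
    """Toy engagement-maximizing ranker: score each post by how often
    the user has engaged with its topic before, so the feed drifts
    toward what the user already believes."""
    affinity = Counter(engagement_history)  # past engagements per topic
    # Counter returns 0 for unseen topics, so novel viewpoints sink.
    return sorted(posts, key=lambda p: affinity[p["topic"]], reverse=True)

history = ["politics", "politics", "cooking"]
posts = [{"id": 1, "topic": "science"},
         {"id": 2, "topic": "politics"},
         {"id": 3, "topic": "cooking"}]
ranked = rank_feed(posts, history)  # politics first, science last
```

Each click feeds back into the history, raising the affinity score of the same topic further—the self-reinforcing loop that narrows exposure over time.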
Conclusion: The Human–AI Equation
Artificial Intelligence is not a distant future—it is a silent companion shaping daily choices, emotions, and identities. The psychology of AI and human behaviour reveals both risk and potential.
The challenge lies in protecting attention, empathy, autonomy, and critical thinking in a world optimized for engagement and prediction. The opportunity lies in ethically designed AI that expands access to education, mental health support, and meaningful connections.
The true test of intelligence in the 21st century may not be how advanced AI becomes—but how wisely humans learn to live alongside it.