The tragic case of a teenager in the USA who died by suicide after becoming addicted to an AI bot is a sobering reminder of the risks posed by addictive tech and AI products. The child developed a deep emotional attachment to the bot, eventually isolating himself from the real world and withdrawing from nearly all human interaction.
For years, reports have highlighted the negative effects of social media on human behaviour — especially among children — with one of the major concerns being its addictive nature. While the effects of excessive screen time and reduced real-world interaction are well documented, the rise of AI bots and companion tools has added further complexities.
Despite growing evidence from ongoing research, tech giants and social media companies have failed to take robust steps against addiction and the mental health hazards their products create. Even newer entrants in the tech space often focus not on building sustainable, healthy products but on chasing virality. The result? A race to the bottom in which the most engaging product wins, regardless of the cost to user well-being.
Sadly, it’s not difficult to see why this happens. The success metrics these products chase, and that investors reward, do not hold them accountable for harm. In fact, they often encourage it: “user engagement” remains the dominant success metric.
When a platform’s success is measured by how long users spend on it, how frequently they return, and how hooked they become, addiction is not a problem; it is a business model. In such a system, the interests of founders and investors stand in direct conflict with the well-being of their users, especially young minds.
With the advent of GenAI tools like ChatGPT, the cost of sticking to these outdated metrics could be dangerously high. The emotional intimacy and dependence that these tools can generate — particularly in young, developing minds — raise concerns far greater than those posed by traditional social media. The damage is real, and it is already unfolding.
Of course, regulation and governance play an important role. But meaningful change won’t happen unless the very definition of success changes for founders, for investors, and for the broader tech ecosystem. We need to stop chasing products that are merely viral and start rewarding those that are loved, trusted, and, above all, sustainable.
Can the age-old argument that for-profit enterprises bear only limited responsibility toward social welfare still hold in the dangerous times we are entering?
While it may be hard for individual users to change how tech giants operate, awareness, especially among parents, educators, and children, remains essential. We as individuals must pause before celebrating every overnight tech success. If a product is driven purely by engagement and virality, we must ask: at what cost?
If society keeps glorifying entrepreneurs who build addictive, engagement-first platforms, we will keep discouraging those who work to design ethically responsible technology. Until the incentives change, engagement-led hypergrowth will remain the norm, and ethical design the exception.
But what’s the alternative?
What could new, more responsible success metrics look like?
How can we measure the impact of AI and social media platforms beyond just engagement?
Can investors begin evaluating teams and products based on trust, long-term value, or user well-being?
These are the questions worth raising. And by raising them, one at a time, we can collectively push for a safer, more responsible, and more accountable tech culture.
Would love to hear your thoughts in the comments.