
Reclaiming Attention in the Age of AI

Ben Szuhaj, AI Strategist

Insights from our first CO.LAB session

Recently, we launched CO.LAB, a new discussion series exploring the evolving relationship between humans and AI through futures-oriented co-design. Held during our weekly Lab Day—a space for staying ahead of rapid advancements in our field—this inaugural session tackled a fundamental question:

AI promises to free up our time—but how can we design systems that give us back control over our attention?

Rather than debating the risks and benefits of technologies like AI in deterministic terms, CO.LAB reframes these conversations as design challenges, focusing on what we want to happen, rather than passively accepting what might.

This first session set the tone for CO.LAB as a space for intentional provocation, diverse perspectives, and thoughtful design exploration. Here’s a recap of the key insights and takeaways from our discussion.

The Cost of Divided Attention

Our discussion began with a stark reality check: Americans now spend an average of 4 hours and 30 minutes per day on their phones, up 52% from 2022, when average daily use was 2 hours and 54 minutes, and that figure is expected to climb to 4 hours and 39 minutes this year. A majority of Americans (~57%) also consider themselves 'mobile phone addicts' (all data above from Consumer Affairs).

This increasing dependence on mobile technology has broader implications. Many participants in our session expressed concerns that our fragmented attention isn’t just a personal struggle—it actively shapes our perception of reality. Some highlighted how social media feeds dictate not just what we see but what we value, replacing curiosity with certainty. Others pointed out how the monetization of engagement distorts public discourse: in a system where profit maximization drives content visibility, misinformation thrives when it is attention-grabbing—novel, emotionally charged, or aligned with personal and social identities. Research published by the American Psychological Association supports this, showing that people are more likely to share misinformation with exactly these qualities. And people are not the only amplifiers: algorithms prioritize engagement metrics that favor sensationalism over accuracy, further entrenching misinformation in public discourse.

Some participants framed this as a structural problem: platforms prioritize engagement over well-being, designing interfaces that maximize time spent rather than user benefit. Others saw it as a social problem, emphasizing how digital culture has rewired the way we seek validation and social belonging. Either way, the consensus was clear: unchecked, these forces erode our ability to focus, think critically, and engage meaningfully with the world.

One striking example discussed was how recommendation algorithms are fine-tuned to exploit cognitive biases. Participants pointed to the Center for Humane Technology’s #OneClickSafer initiative, which highlights how frictionless sharing enables the rapid spread of harmful content—misinformation, hate speech, and violence—by removing barriers to thoughtful engagement. The initiative proposes a simple but effective remedy: disable the reshare button after two levels of sharing. Users could still copy and paste content to share it further, but the added friction would slow the viral spread of misinformation and harmful content. Research from Facebook’s own Integrity Team found that this approach curbed harmful content more effectively than many of the costly moderation strategies subsequently implemented. The solution was nonetheless deprioritized over concerns about its impact on engagement metrics, underscoring the persistent conflict between ethical design principles and profit-driven priorities within the tech industry.
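To make the mechanism concrete, here is a minimal, purely illustrative sketch of how a reshare-depth cap like the one #OneClickSafer proposes might work. The data model and function names are our own assumptions for the sake of the example, not the Center for Humane Technology’s proposal or any platform’s actual implementation.

```python
# Illustrative sketch of reshare-depth limiting (hypothetical; not an
# actual platform implementation). Each post records how many reshare
# "hops" separate it from the original; past a cap of two, the one-click
# reshare action is disabled and users must copy and paste instead.
from dataclasses import dataclass
from typing import Optional

MAX_RESHARE_DEPTH = 2  # the "two levels of sharing" discussed above


@dataclass
class Post:
    author: str
    content: str
    reshared_from: Optional["Post"] = None

    @property
    def reshare_depth(self) -> int:
        """Number of reshare hops separating this post from the original."""
        return 0 if self.reshared_from is None else self.reshared_from.reshare_depth + 1


def can_reshare(post: Post) -> bool:
    """One-click resharing stays enabled only below MAX_RESHARE_DEPTH hops."""
    return post.reshare_depth < MAX_RESHARE_DEPTH


def reshare(post: Post, user: str) -> Post:
    if not can_reshare(post):
        # The content remains visible and copy/paste still works; only the
        # frictionless one-click path is removed.
        raise PermissionError("Reshare limit reached: copy and paste to share further.")
    return Post(author=user, content=post.content, reshared_from=post)


# original -> reshare (depth 1) -> reshare (depth 2) -> button disabled
original = Post("alice", "breaking news!")
first = reshare(original, "bob")
second = reshare(first, "carol")
print(can_reshare(second))  # False: further one-click resharing is blocked
```

The specific data model matters less than the friction itself: once the depth cap is hit, sharing is still possible, just no longer effortless.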

The Hidden Trade-offs

While many of us recognize the issue, breaking free from it is difficult. Several participants pointed out that our brains are wired for instant gratification—making a dopamine hit from social media far more appealing than slower, more rewarding activities like reading or exercise. Others noted that information overload has led to shallower engagement: in an era of infinite scroll, we tend to remember emotions, not facts. One participant compared this to the "Dorito effect"—just as processed foods rewire our taste buds, digital consumption reshapes our mental habits, making authentic engagement feel less satisfying.

Another key trade-off emerged: our overreliance on digital tools to manage our attention paradoxically weakens our ability to regulate it ourselves. If every moment of boredom is filled by algorithmically-curated content, we lose the capacity to sit with our thoughts, problem-solve, and cultivate deep focus, ultimately eroding our creativity and autonomy.

Participants also noted that many of the most engaging digital experiences—whether social media, news, or even workplace tools—create an illusion of productivity while actually draining cognitive resources. The sheer volume of notifications, context-switching, and algorithmic nudging makes deep work more challenging, leading to a world where focus is increasingly fragmented.

How Can We Take Back Control?

If attention is a finite resource, how do we protect it? The conversation produced a range of perspectives:

  • Rethinking Design Choices: Some suggested simple UI changes, like limiting infinite scroll, introducing auto-shutoff features for overused apps, or making the interface more reflective of real-world time constraints (a small sketch of the auto-shutoff idea follows this list).
  • AI as an Attention Ally: Instead of competing for our focus, AI could help us manage it. One idea was an AI assistant that learns personal rhythms and nudges users toward healthier behaviors—reminding them to step away from the screen when they’re over-engaged, or even integrating with wearables to monitor stress levels and suggest breaks.
  • Changing Incentives: Some advocated for shifting ownership models. What if digital platforms were cooperatively owned, prioritizing user well-being over ad revenue? Others suggested policy interventions that would require transparency in algorithmic decision-making, giving users greater control over their digital environments.
  • Leveraging Social Norms: Cultural shifts often drive change faster than policy. Could mindful tech use become aspirational, much like trends toward digital minimalism? Some participants argued that embedding digital well-being into education—teaching children how to critically engage with online spaces—could create lasting behavioral change.
  • Physical Infrastructure for Attention: Participants explored the idea that digital distractions are exacerbated by a lack of physical spaces designed for deep work and social connection. Could investment in libraries, coworking spaces, or even digital-free zones in cities help counterbalance the omnipresence of screens? 
  • Locally-Owned Digital Infrastructure: Beyond physical spaces, participants discussed how locally-owned digital infrastructure—such as community-driven social networks, cooperatively owned digital platforms, and municipal broadband—could foster healthier online interactions, encourage more intentional engagement, and help communities reclaim digital autonomy from profit-driven tech giants. Crucially, local communities would have the power to set the incentive structures and business models for these platforms, ensuring that digital spaces align with communal values rather than external profit motives. One proposed mechanism for achieving this was…
  • An Attention Tax: We tax the transacting of money, so how crazy would it be to tax the transacting of attention? Such a policy, while far-fetched, could function as a regulatory mechanism to curb the harmful effects of exploitative digital practices. Revenue generated from an attention tax could be redirected to support community well-being initiatives, fund local digital literacy programs, or subsidize alternative, ethical technology solutions. By introducing financial disincentives for extractive attention economies, we could rebalance the digital landscape in favor of more meaningful and intentional interactions.
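As a thought experiment on the first bullet above, here is a small, purely illustrative sketch of an auto-shutoff feature. The budget values and class names are assumptions made for the example, not a real operating-system or platform API.

```python
# Hypothetical sketch of an "auto-shutoff" usage budget. Once the daily
# time budget for an app is exhausted, the feed simply stops serving
# more content instead of scrolling infinitely.
DAILY_BUDGET_SECONDS = {"social_feed": 30 * 60}  # e.g. 30 minutes per day


class UsageMeter:
    def __init__(self) -> None:
        self.spent: dict[str, float] = {}

    def record(self, app: str, seconds: float) -> None:
        """Accumulate time spent in an app today."""
        self.spent[app] = self.spent.get(app, 0.0) + seconds

    def allow_more_content(self, app: str) -> bool:
        """Return False once the daily budget is used up (the auto-shutoff)."""
        budget = DAILY_BUDGET_SECONDS.get(app)
        return budget is None or self.spent.get(app, 0.0) < budget


meter = UsageMeter()
meter.record("social_feed", 29 * 60)
print(meter.allow_more_content("social_feed"))  # True: still under budget
meter.record("social_feed", 2 * 60)
print(meter.allow_more_content("social_feed"))  # False: the feed stops loading
```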

Designing a Future of Intentional Attention

So what does an ideal future look like? For some, it’s about better tools. An AI assistant that subtly guides users toward optimal habits. A digital dashboard that gives individuals as much insight into their own data as advertisers currently have. Others emphasized the need for more systemic intervention: local community infrastructure, public spaces designed for social interaction rather than digital isolation, policy levers that regulate the attention economy.

Some participants envisioned a world where technology helps enhance self-awareness rather than diminish it. Imagine an AI-powered interface that not only tracks screen time but also provides qualitative insights: "You tend to feel more anxious after 45 minutes on social media—consider taking a break." Others imagined a system where platforms automatically adjust to minimize disruptive notifications during deep-focus tasks, preventing the need for users to self-regulate constantly.
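As a rough sketch of what such a qualitative insight might look like under the hood, a system could pair screen-time sessions with simple mood check-ins and surface a nudge only when a clear pattern emerges. All names, thresholds, and the self-reported anxiety scale below are illustrative assumptions, not a real product or API.

```python
# Minimal, hypothetical sketch of a qualitative attention insight.
# Sessions pair time-in-app with a self-reported anxiety score (1 = calm,
# 5 = anxious); if long sessions consistently feel worse than short ones,
# the assistant surfaces a gentle nudge.
from dataclasses import dataclass
from statistics import mean


@dataclass
class Session:
    app: str
    minutes: int
    anxiety: int  # self-reported after the session, 1 (calm) to 5 (anxious)


LONG_SESSION_MINUTES = 45  # threshold used in the example above


def attention_insight(sessions: list[Session], app: str) -> str | None:
    """Compare mood after long vs. short sessions and nudge only if a gap appears."""
    long_scores = [s.anxiety for s in sessions if s.app == app and s.minutes >= LONG_SESSION_MINUTES]
    short_scores = [s.anxiety for s in sessions if s.app == app and s.minutes < LONG_SESSION_MINUTES]
    if len(long_scores) < 3 or len(short_scores) < 3:
        return None  # too little data to say anything meaningful
    if mean(long_scores) - mean(short_scores) >= 1.0:
        return (f"You tend to feel more anxious after {LONG_SESSION_MINUTES}+ minutes "
                f"on {app}. Consider taking a break sooner.")
    return None


history = [
    Session("social", 60, 4), Session("social", 50, 5), Session("social", 55, 4),
    Session("social", 15, 2), Session("social", 20, 2), Session("social", 10, 3),
]
print(attention_insight(history, "social"))
```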

Ultimately, reclaiming attention isn’t just about individual self-discipline—it’s about redesigning the systems that shape our digital lives. At KUNGFU.AI, we believe AI can be a force for autonomy, not just engagement.

If you’re thinking about how AI can shape healthier digital ecosystems, we’d love to collaborate. Let’s build a future where technology works for our attention, not against it. If you're interested in learning more, you can reach out to me directly at ben.szuhaj@kungfu.ai.
