Reflections from Duke University’s 2026 Conference on Society-Centered AI
Erin Worsham, Executive Director of CASE
February 19, 2026
I walk away from most conversations about AI these days feeling both energized and terrified.
On one side: rapid capability gains, productivity promises, and emerging markets calling it the “great leveler.”
On the other: biased hiring tools, deepfakes eroding trust, stressed workers improvising without training, and organizations adopting tools without a plan.
At Duke University’s 2026 Conference on Society-Centered AI, it was clear that the truth is in between. The question is not whether AI will advance. It’s whether we will shape its implementation in ways that expand dignity and opportunity.
As Reggie Townsend, Vice President of the Data Ethics Practice at SAS, asked: What would it look like for AI to reduce human suffering?
Here are three observations from the conference to help us get there:
1. This Is an Implementation Story, Not Just a Technology Story
AI capabilities are accelerating at an astonishing pace. But as Arvind Narayanan, Professor and Director of the Center for Information Technology Policy at Princeton University, reminded the audience, invention is only the first stage of transformation.
Electricity did not revolutionize factories when it was invented. It transformed industry only after companies redesigned workflows, retrained workers, and rebuilt infrastructure around it.
AI is no different.
Yes, model capabilities are advancing at an astonishing pace. But integration and adoption are much slower and harder. The toughest challenges are downstream of the technology: learning curves, behavior change, workflow redesign, and institutional reform.
Ronnie Chatterji, Duke University Professor and Chief Economist at OpenAI, reinforced this, noting that AI adoption requires more — not less — human leadership. Organizations that simply “add AI” without change management, training, and redesign are unlikely to see meaningful impact.
That means leaders should ask:
- Are we redesigning workflows or just layering tools onto old systems?
- Are we investing in training so staff can use AI confidently and responsibly?
- Are senior leaders visibly driving this change?
Start small. Deploy incrementally to get adaptation and adoption right. Measure outcomes to ensure intended impact. Recognize that the technology is just one piece of the puzzle.
2. Good Intentions Are Not a Governance Strategy
As Townsend observed, “Good intentions have a tendency to weaken under pressure” — especially profit pressure.
Most organizations claim they want to use AI responsibly. But, I’ll be honest — I am unwilling to rely on the good intentions of corporate leaders or tech executives alone to ensure AI benefits society.
We have enough warning signs. Audits show bias in AI hiring tools. Deepfakes threaten trust. Workers using AI without training report higher stress. If AI is going to lift people up rather than leave them behind, governance cannot be an afterthought. It must be embedded.
That includes:
- Clear norms around human accountability for AI outputs (“human in the loop”).
- Policies requiring audits in high-risk domains.
- Processes for incorporating affected stakeholders into design.
- Incentives and measurement aligned around human outcomes, not just cost savings.
3. Augmentation — Not Automation — Must Be Matched With Equity
Much of the anxiety around AI centers on job loss. But research presented by Yale University Professor Nicholas Christakis and others suggests a more nuanced reality: humans plus AI often outperform either alone.
Doctors paired with AI improve diagnostic outcomes. Lawyers become more productive, sometimes increasing demand for their services. Radiologists were not eliminated as predicted; in fact, imaging per patient actually increased.
The frame speakers embraced was that of AI as augmentation (complementary to humans, helping people do their work better or faster), not automation or replacement of people.
But let’s be very clear here: augmentation does not automatically mean equity and inclusion.
Chatterji described what he called a “capability overhang”: sophisticated users are accelerating ahead while median users are still learning the basics. We’re also seeing gender disparities in enterprise AI adoption and risks to entry-level roles — positions that are often filled by more vulnerable populations and that also serve as pipelines into higher-paid positions.
Without deliberate attention, AI could amplify inequity rather than lift opportunity. So, leaders should:
- Protect entry-level training pipelines so future leaders are developed.
- Invest intentionally in AI literacy and workforce upskilling.
- Close gender and access gaps through targeted training.
- Track who benefits — and who is left behind.
The Leadership Test Ahead
I left the conference recognizing that the hype and the skepticism around AI can both be true at the same time. That AI capabilities will continue to improve, yet integration will be slower and more complex. And that outcomes — who benefits and how, what harm is done — will be shaped far more by leadership choices than by model releases.
For those of us working at the intersection of impact, business, and policy, the question is not whether to engage with AI. It’s how. The test ahead is whether we can rise to the challenge and steer AI toward society-centered outcomes.