Silicon & Soul: The Case for Connection, Trust & Values in AI Design & Deployment
These three pillars form the foundation for AI systems that benefit humanity rather than simply optimize for technical metrics.
Connection: The Human Element in Machine Intelligence
AI systems don't exist in isolation—they mediate human relationships and experiences. Without prioritizing connection:
AI risks becoming a wedge that separates rather than connects us, creating information silos and digital divides
Systems optimize for engagement metrics rather than meaningful human interaction
Technology becomes an end unto itself rather than a tool for enhanced human collaboration
The most successful AI implementations strengthen human bonds, whether by facilitating new connections between like-minded individuals, making expertise more accessible, or augmenting human capabilities in ways that allow for deeper, more meaningful interactions.
Trust: The Currency of AI Adoption
Without trust, even the most sophisticated AI systems will fail to achieve widespread adoption or deliver their promised benefits:
Users must trust that AI systems produce reliable, accurate outputs
Organizations must trust that AI won't create unexpected liabilities or risks
Society must trust that AI development serves collective rather than narrow interests
Trust isn't built through technical specifications alone, but through transparency in how systems work, clarity about their limitations, and accountability when things go wrong. Creating governance frameworks that maintain trust is just as important as the technical architecture of AI systems themselves.
Values: Uncovering the Hidden Influences on LLM Outputs
Every AI system embodies certain values, whether explicitly acknowledged or not:
The data we select reflects what we consider important enough to measure
The optimization targets encode which outcomes we prioritize
The deployment decisions reveal whose needs we serve
Your Speakers:
Charis Loveland
Charis Loveland manages the speaker community of Inspire programs, including EPIC Emotional Intelligence at Amazon, where she previously led the artificial intelligence and machine learning (ML) team for AWS Cloud Intelligence. She brings two decades of experience in product management, AI, data analytics, new product introduction, file and hardware storage, and software development. After launching a crowdsourcing platform for Azure ML at Microsoft, Charis founded Rue La La's data science team, where she created and executed a roadmap and strategy to personalize experiences and segment customers. Charis co-founded an ML startup and has taught several business courses at General Assembly, as well as online data science courses at MIT, Dartmouth, and Columbia. She serves as a coding instructor, mentor, and advocate for early STEM education and volunteers for several nonprofits that promote greater diversity in the technology industry.
Christopher Lafayette
Christopher Lafayette is an emergent technologist and humanitarian focused on advancing AI, metaverse, medtech, and spatial intelligence technologies with emphasis on inclusion, ethics, and social impact. He founded GatherVerse, a global platform for humanizing technology discussions; HoloPractice, a healthcare-technology incubator; and Hyper Policy, which guides organizations in ethical technology integration.
As a prominent speaker, Lafayette has presented at leading institutions including NIST, Mayo Clinic, Stanford University, Microsoft, and the Linux Foundation. His work spans keynotes and roundtable discussions on technological innovation, consistently advocating for human-centered approaches and ethical advancement in emerging technologies.
Jordan Loewen-Colón
An Adjunct Assistant Professor of AI Ethics and Policy at Queen’s University’s Smith School of Business, Jordan is a recognized leader in Responsible AI. He co-founded the AI Alt Lab, a non-profit dedicated to ensuring AI serves the public good through research, education, and evaluation, partnering with governments (like the State of Utah), tech startups, and health organizations. Drawing on his experience as a former AI Ethics Fellow and CEO of a digital therapeutics startup, Jordan brings a unique practical perspective to complex questions of ethical tech policy. His research into AI ethics, emerging tech, philosophy, and Indigenous data sovereignty informs his upcoming book, "Reality Technologies" (Fortress Press, 2025). Jordan also shares his thought leadership through videos and podcasts.
Russell Bundy
Russell Bundy is the Co-Founder and Chief Visionary Officer of Just Verify, a global initiative dedicated to building the trust layer infrastructure for artificial intelligence. Through the AI Verification Initiative, he leads efforts to develop open standards, frameworks, tools, and protocols that ensure AI systems are trustworthy, transparent, accountable, and aligned with human values.
Russell is focused on enabling organizations and individuals to adopt AI confidently, ensuring they can trust what they see, hear, and interact with across digital ecosystems. He also leads the Just Verify Think Tank, a global collective of partners advancing the AI verification movement through collaboration, thought leadership, and real-world implementation.
In addition to his work with Just Verify, Russell serves as Director of Partnerships at GatherVerse, where he bridges innovation and humanity through global collaboration. His multidisciplinary background spans entertainment, e-commerce, blockchain, and Web3, all unified by a commitment to ethical innovation and social impact.
Driven by the belief that technology should serve humanity and be trustworthy, Russell continues to influence global conversations around AI, trust, and verification, inviting others to co-create a future where emerging technologies enhance human well-being and build a more equitable society.
Networking facilitated by:
Tamara Lechner
Tamara Lechner has spent the past 20 years helping people live happier, more productive, and meaningful lives. As a "people person" in the AI world, she currently chairs the AI for Human Flourishing think tank within the Human Flourishing Program at Harvard. There, she partners with organizations to help them use a data-driven approach to create positive work environments where technology and people work in harmony, driving exceptional results. She is also part of the World Flourishing Organization's leadership team, helping to reach its goal of enabling 1 billion people to flourish by 2035.