AI Impact Lab

We study how people and AI shape each other: how consumers, employees, and citizens experience AI systems (ConsumerX), how speech and language reveal deeper psychological states in human–AI interaction (VoiceX), how to design AI technologies that are more effective, intuitive, and trustworthy (DesignX), and how to build the tools and methods that make behavioral AI research robust and transparent at scale (ToolX).

Our research addresses some of the most consequential questions at the intersection of AI and human behavior — from autonomous shopping agents that act on consumers’ behalf, to large language models reshaping financial advisory and mental healthcare, to workplace AI transforming how people make decisions and collaborate. This work is conducted with and supported by leading organizations across consumer electronics (Logitech), financial services (Swiss Re, UBS, Julius Bär), healthcare (Roche Diagnostics), and the automotive industry (Audi, VW, Skoda, Porsche).

 


Our research creates impact across six domains in which AI is fundamentally reshaping how people interact with intelligent systems, organizations, and each other.

1. Autonomous Commerce & Agentic AI

How consumers experience, trust, and delegate decisions to autonomous shopping agents, AI-powered marketplaces, and personalized recommendation systems.

2. Financial Services & Advisory

How conversational design, voice modality, and interface architecture shape trust, risk perception, and advisory outcomes in wealth management, insurance, and banking.

3. Healthcare & Wellbeing

How AI companions, therapy bots, and diagnostic assistants affect emotional wellbeing and social connection.

4. Workplace & Productivity

How AI augments — and sometimes undermines — human judgment, collaboration, and creativity in professional workplace settings.

5. Automotive & Industrial Automation

How the physical embodiment and movement patterns of AI systems shape human comfort, safety, and collaboration in manufacturing settings.

6. Digital Platforms & Social Media

Scalable tools and methods for in-context experimentation and behavioral measurement on algorithmically mediated platforms.
 

Research Bites

MindMiner: Uncovering Linguistic Markers of Mind Perception as a New Lens to Understand Consumer–Smart Object Relationships

Hartmann, J., Bergner, A., Hildebrand, C. (2023): MindMiner: Uncovering Linguistic Markers of Mind Perception as a New Lens to Understand Consumer–Smart Object Relationships, Journal of Consumer Psychology, Vol. 33(4), pp. 645–667.

Real-Time Adaptive Industrial Robots: Improving Safety and Comfort in Human-Robot Collaboration

Hostettler, D., Mayer, S., Albert, J. L., Jenss, K. E., Hildebrand, C. (2025): Real-Time Adaptive Industrial Robots: Improving Safety and Comfort in Human-Robot Collaboration, ACM CHI Conference on Human Factors in Computing Systems, Forthcoming.

Biased Echoes: Generative AI Models Reinforce Investment Biases and Increase Portfolio Risks of Private Investors

Winder, P., Hildebrand, C., Hartmann, J. (2025): Biased Echoes: Generative AI Models Reinforce Investment Biases and Increase Portfolio Risks of Private Investors, PLoS ONE, Forthcoming.

Haptic Rewards: How Mobile Vibrations Shape Reward Response and Consumer Choice

Hampton, W., Hildebrand, C. (2025): Haptic Rewards: How Mobile Vibrations Shape Reward Response and Consumer Choice, Journal of Consumer Research, Forthcoming.

How Artificial Intelligence Constrains the Human Experience

Valenzuela, A., Puntoni, S., Hoffman, D., Castelo, N., De Freitas, J., Dietvorst, B., Hildebrand, C., et al. (2024): How Artificial Intelligence Constrains the Human Experience, Journal of the Association for Consumer Research, Vol. 9(3), pp. 241–256.

Research Pillars

Pillar 1 — ConsumerX

We study how AI systems — from conversational agents and robo-advisors to autonomous shopping agents and AI companions — shape the emotional, cognitive, and behavioral dynamics of the consumer experience. As AI increasingly acts on behalf of consumers, makes recommendations in high-stakes domains, and serves as a social presence, understanding these interactions becomes critical for businesses, policymakers, and society.

Key themes:

  • Autonomous and agentic AI (e.g., shopping agents)
  • AI companions and social AI
  • Effective design of conversational interfaces across consumer touchpoints
  • Voice vs. text modality effects on perception and interaction outcomes
  • Haptic feedback and embodied interaction of devices and user interfaces

Applications:

  • Autonomous commerce and AI shopping agents
  • Retail and customer service automation (AI customer service agents)
  • Sales automation & conversion (AI sales agents)
  • Financial services (robo-advisors, conversational financial planning, LLMs in wealth management)
  • Healthcare and wellbeing (therapy bots, AI companions, mental health applications)

Representative work:

  • Machine Talk: How voice design shapes brand relationships (JCR)
  • Understanding consumer reactions to service bots (JCR)
  • Conversational robo-advisors as trust surrogates (JAMS)
  • Voice bots on the frontline (JAMS)
  • Haptic rewards (JCR)

Pillar 2 — VoiceX

As large language models power increasingly sophisticated conversational agents in sensitive domains — from financial advisory to mental health support — understanding and designing the voice and language layer of these systems becomes essential.

Key themes:

  • Vocal cues as a behavioral signal (acoustic feature extraction for psychological state & trait detection)
  • Voice design & engineering (e.g., AI voice personalities, matching voice & task contexts)
  • Voice and conversational analytics (mind perception markers in language, linguistic style detection, conversational flow)

Applications:

  • Prediction models using voice and audio data
  • Synthetic voice design & personalization of AI agents
  • Conversational technologies in market research (data quality & user experience)
  • Financial communication (effective design of conversational robo-advisors)


Pillar 3 — DesignX

As AI systems become more autonomous and pervasive in both consumer and workplace settings, designing interactions that are perceived as safe, intuitive, transparent, and emotionally resonant is a central design challenge and focus of our research.

Key themes:

  • Human-Robot Interaction (HRI): Designing the movements and adaptability of industrial and service robots to improve human comfort, safety, and collaboration.
  • Embodied Cognition: How physical interactions (like touch, gesture, and haptics) shape abstract concepts like reward, choice, and product preference.
  • Adaptive, Personalized Interfaces: How digital interfaces such as product configurators and mass customization systems can be adapted and personalized to the individual user, and how this shapes downstream consumer choice.

Applications:

  • Industrial Automation: Creating adaptive robots that are safer and more comfortable for human co-workers.
  • Personalized Product Customization: Building customization interfaces that reduce user stress and improve performance outcomes for consumers and firms (faster completion, fewer drop-outs).
  • Workplace AI: Designing human-AI collaboration tools for knowledge workers, managers, and teams.
  • Agentic Consumer Systems: Interface design for AI that browses, recommends, negotiates, and purchases on behalf of consumers.


Pillar 4 — ToolX

We develop open-source tools and infrastructure to measure and understand human behavior with AI technologies at scale and in real-world environments. This includes LLM-powered research applications that bring academic insights directly into the hands of practitioners and consumers. Our work also contributes to meta-science through replication efforts, large-scale meta-analyses, and initiatives that promote transparency, reproducibility, and cumulative evidence generation in behavioral and consumer research.

Key themes:

  • Multi-modal behavioral measurement tools (voice, text, haptics, clickstream data)
  • Digital in-context experimentation (DICE for social media research)
  • Automated feature extraction from voice and text data (voiceR, MindMiner)
  • Living meta-analyses, reproducibility, and open science


Our Team

Lab Members

Christian Hildebrand

Prof. Dr.

Executive Director

Institute of Behavioral Science & Technology (IBT-HSG)
Office 64-410
Torstrasse 25
9000 St. Gallen

Bianka Ledermann

Head of Administration

Institute of Behavioral Science & Technology
Office 64-420
Torstrasse 25
9000 St. Gallen

Philipp Winder

Dr.

Senior Researcher

Institute of Behavioral Science & Technology
Torstrasse 25
9000 St. Gallen

Kaede Johnson

M.Sc.

Researcher & PhD Student

Institute of Behavioral Science & Technology (IBT-HSG)
Torstrasse 25
9000 St. Gallen

Marusa Pintar

M.Sc.

Researcher & PhD Student

Institute of Behavioral Science & Technology
Torstrasse 25
9000 St. Gallen

Zeliha Oğuz-Uğuralp

B.Sc.

Research Assistant

Ahmet Kaan Uğuralp

B.Sc.

Research Assistant

Darshil Shah

M.Sc.

Researcher & PhD Student

Institute of Behavioral Science & Technology
Torstrasse 25
9000 St. Gallen

Alumni

Anouk Bergner

Dr.

University of Geneva, former P&G

William Hampton

Dr. 

Google, former TikTok

Francesc Busquet

Dr.

PwC

Anna Bouwer

Dr.

Bain & Company

Meike Zehnle

Dr.

Digitec Galaxus

Fotis Efthymiou

Dr.

EDGE Empower


Hauke Roggenkamp

Dr.

ETH Zürich

Former Visiting PhD Researchers

  • Ilaria Querci (now Assistant Professor, NEOMA Business School)
  • Luigi Monsurrò (now Assistant Professor, University of Modena and Reggio Emilia)
  • Max Beichert (now Assistant Professor, Bocconi University)

Key Collaborators & Project Partners

Academics

  • Jonathan Levav (Stanford)
  • Julian de Freitas (Harvard)
  • Cait Lamberton (Wharton)
  • Noah Castelo (Alberta)
  • Gizem Yalcin (Texas AM)
  • Ana Valenzuela (ESADE)
  • Johannes Boegershausen (RSM)
  • Amir Sepehri (ESSEC)

Companies

  • Logitech
  • Swiss Re
  • Mobiliar
  • Julius Bär
  • UBS
  • Roche
  • Audi
  • Porsche

 
