C3AI Detailed Workshop Program
Half-Day Workshop held in the afternoon
1:00 - 1:10 (10 mins) Introduction
1:10 - 1:40 (20 mins + Q&A) Invited keynote:
Marina Escobar Planas recently earned her PhD in "Trustworthy AI" from the Polytechnic University of Valencia. Her research explores the distinction between trustworthy AI and users' subjective trust, with a particular focus on helping children understand the limitations of AI and manage their expectations. Notably, her work on "de-trusting" AI in specific contexts offers valuable insights into fostering trust, transparency, and ethical interactions in AI systems for young audiences.
1:40 - 2:00 (20 mins) Speed networking
2:00 - 3:00 (60 mins) Hands-on Activity - Human Factors brainstorming:
2:00 – 2:05 Submission 7 presentation
2:05 – 2:07 Q&A for Submission 7
ABSTRACT Although children today have increasing access to AI technologies, none of the widely available tools (e.g., ChatGPT, DeepSeek) are explicitly designed with children in mind. These systems generate responses in a fluent, human-like tone by remixing information from multiple sources without clear citation, which can obscure potential biases or inaccuracies. Given their growing role as information providers in children's daily lives, it is crucial to understand how children interpret, evaluate, and trust the content generated by AI. While prior research has shown that elementary school-aged children can discern the accuracy of information, less is known about how their baseline beliefs and conceptual models of AI shape these judgments. In particular, it remains unclear how misconceptions about AI, either overly optimistic or skeptical, may lead to undue trust or unwarranted dismissal of AI-generated information. To address this gap, this study examines how children’s general understandings of AI shape their trust in a novel conversational AI. To investigate these dynamics, we conducted semi-structured interviews with children aged 8–10 before and after their two-week interaction with Curio the Catbot, a child-friendly conversational AI that answers any question children ask. Children expressed strong, often polarized views about general AI tools—some viewed them as always correct and more intelligent than humans, while others doubted them entirely, citing the fallibility of their human developers. However, their explanations of how AI works were consistently only partially accurate, reflecting common misconceptions, such as the belief that AI searches the internet in real-time or is updated by humans with new information. We found that these initial beliefs still appeared to influence how children evaluated information provided by a novel technology. Those with overly optimistic views tended to trust the information without much reflection, while those with skeptical attitudes were quick to dismiss it, regardless of its accuracy. Parental perspectives also shaped these attitudes, often reinforcing children's distorted understanding. These findings suggest that children’s misconceptions about AI, not simply their experiences with it, are a key driver of their (mis)trust in AI-generated content. Based on these preliminary findings, we suggest design guidelines that could promote children’s critical engagement with AI: (1) integrating age-appropriate explanations of both the advantages and limitations of AI—clarifying how it generates responses, when it is helpful, and when it may err, (2) embedding in-situ reflective prompts that help children evaluate the reliability of information, and (3) supporting shared reasoning with caregivers to model healthy skepticism. By fostering AI literacy through thoughtful design, we can better equip children to approach AI-generated information with a critical lens.
2:07 – 2:12 Submission 3 presentation
2:12 – 2:14 Q&A for Submission 3
ABSTRACT As digital interactions become increasingly common, chat systems are widely used in education, entertainment, and customer service. With 53% of customer service users preferring to engage with companies through chat services, digital chat queues have become an important aspect of online communication. However, these queues often lead to frustration and disengagement, as users must wait for assistance. To address this, companies have experimented with time fillers and estimated waiting times, but maintaining user engagement remains a challenge. This issue is urgent in settings where user retention is critical, such as in mental health and support services for children. This research effort aims to understand and enhance the queuing experience for children in digital chats by exploring the key design and technical requirements for developing AI-driven interactive features that can adapt to and sustain children’s engagement through storytelling. It examines how these features impact satisfaction, engagement, and perceived wait times, ultimately improving the queuing experience. We would like to analyze our initial CAI cases within the workshop to gain a deeper understanding of trustworthy CAI design for future design iterations.
2:14 – 2:19 Submission 5 presentation
2:19 – 2:21 Q&A for Submission 5
ABSTRACT This study presents an emotionally responsive AI avatar developed as a digital teaching assistant to support natural interaction in educational settings. The system integrates speech processing, facial animation, a large language model, and emotion detection that adapts responses to users’ emotional states. In-class tests were conducted in Swedish junior high schools, each involving 19 students in small groups who interacted with the assistant in English on Social Studies and AI topics. Teachers facilitated the sessions to align with the curriculum. Survey results showed generally positive student responses, with strong interest in continued use, demonstrating the potential of emotionally adaptive AI in classrooms.
2:21 – 3:00 Group activity
3:00 - 3:30 (30 mins) Coffee break
3:30 - 4:40 (70 mins) Group activity - Child-AI Interaction scenarios:
3:30 - 3:35 Submission 4 presentation
3:35 – 3:37 Q&A for Submission 4
ABSTRACT Alignment in large language models has been categorized into inner and outer alignment. Outer alignment aligns the goals of LLMs with human values, which include beliefs, desirable goals, and guiding principles. It is typically framed along three dimensions: helpfulness, honesty, and harmlessness. The goal of inner alignment, in contrast, is to ensure that an AI system robustly fulfills the given objective in the way humans expect it to.
Prior work also provides a detailed evaluation framework for alignment under seven major categories: reliability, safety, fairness and bias, resistance to misuse, interpretability, goodwill, and robustness. Each of these categories plays an important role. While these are crucial requirements, such frameworks are typically designed for adults, abstracting away the unique needs of specific user populations such as children and teenagers. What these values mean, and what counts as aligning with them, differs between adults and children. However, these comprehensive frameworks could serve as a baseline for building child-centered alignment and evaluation practices.
3:37 - 3:42 Submission 6 presentation
3:42 – 3:44 Q&A for Submission 6
ABSTRACT Tangible interaction has been studied in the area of Child Computer Interaction (CCI) for supporting different goals for children, such as learning of programming and storytelling. Examples of the latter include conversational agents embedded in artefacts and aware of children’s interactions with physical items [1]. Recently, storytelling in CCI has witnessed a growing interest in leveraging Artificial Intelligence (AI). For example, robots use some limited form of AI to enact a given event in a narrative through motion to elicit a response in their listener [2]. Co-creative storytelling, instead, leverages Child Conversational Artificial Intelligence (CAI) to allow children to actively interact with the narrative through natural language (e.g., by answering questions posed by a CAI system) [4], without, however, granting a relevant role to physical actions. Lately, the advent of Generative Artificial Intelligence (GAI) has opened up new research possibilities in the area of CAI and tangible storytelling. Our contribution addresses this research context. It aims at studying and integrating the concept of narrating by doing [3] into current Child-CAI research on tangible storytelling. A narrative can be loosely defined as a text containing a set of elements put in relation to one another. Storytelling, therefore, can be defined as the act of a narrator guiding a listener through a narrative. As previous research has argued, storytelling is a fundamental activity in the development of human cognition, especially for children, since it conveys meaning in an accessible and relatable way while keeping a rigorous logical structure. Narrating by doing aims to bridge theory and practice in the development of tangible storytelling. It leverages physical and dialogue-based interaction for reasoning, pursuing embodied cognition. Considering this concept as framed by Sylla et al. [3], tangibles can play several roles in the narration; for example, they can represent building blocks of a narrative. In this manner, they enable different forms of physical interaction, such as shaking or sorting. Storytelling with tangibles also allows children to play two roles in the storytelling process: passive (listeners) or active in shaping the narration itself (narrators). In the active setting, tangible storytelling stimulates embodied cognition.
We argue that tangible storytelling, and in particular the concept of narrating by doing, can pave new paths in Child-CAI co-creative storytelling, simultaneously enabling children to explore narratives through physical actions and allowing them to actively shape the narration as (co-)narrators, to better scaffold their (embodied) cognitive development.
3:44 - 3:49 Submission 8 presentation
3:49 – 3:51 Q&A for Submission 8
ABSTRACT Nowadays, Artificial Intelligence (AI) is embedded in several products that children use on a daily basis, and these systems can impact children's well-being and lives to varying degrees. Since opportunities also come with risks and concerns, it is crucial that AI systems are designed, developed, and deployed in accordance with both ethical considerations and international human rights standards. It is therefore important to identify existing technical and practical guidelines for practitioners working on those systems, enabling them to build products and services that are safe, reliable, and compliant with children's rights standards and obligations.
3:51 - 3:56 Submission 9 presentation
3:56 – 3:58 Q&A for Submission 9
ABSTRACT This study investigates the influence of game-based learning (GBL) on improving children’s cognitive abilities and motivation within educational environments. A total of 196 participants were involved, with 98 students from a conventional teaching system and 98 from a game-based learning environment. The results indicate that students using GBL with conversational AI (ChatGPT) demonstrated higher accuracy and motivation levels than those in the conventional system. The participants were further classified according to Piaget’s framework, a theory of cognitive development, into two stages: pre-operational (ages 2 to 6) and concrete operational (ages 7 to 11). A notable difference in motivation was observed between the two stages, with children in the concrete operational stage exhibiting higher motivation levels. However, no significant differences in task accuracy were identified between the two stages.
3:58 – 4:40 Group activity
4:40 - 5:00 (20 mins) Wrap-up & Conclusion
IDC25 Interaction Design and Children 2025