Bodily Expressed Emotion
Understanding (BEEU) Challenge
4 Challenge Tracks Open until January 10th, 2026
We are thrilled to announce the launch of the BEEU Challenge 2025, featuring a new emotion benchmark and four challenge tracks dedicated to advancing the understanding and modeling of bodily-expressed emotions in videos.
Overview
The challenge will be based on the Annotated Bodily Expressed Emotion (ABEE) dataset.
This dataset contains ~3,200 short video clips depicting naturalistic bodily expressions of emotion, spanning 8 main emotion categories (happiness, love, surprise, anger, fear, sadness, embarrassment, and disgust) and 20 emotion sub-categories.
There will be multiple tracks within the challenge, and all participants are strongly encouraged to submit papers detailing their use of the dataset to BEEU 2026.
The following are the main categories and sub-categories of emotion:
- Happiness: Playful, Eager/Seeking, Contentment, Pride
- Love: Affection/Care, Romantic or Sexual Attraction
- Surprise: Delightful Surprise, Fearful Surprise
- Anger: Hostility, Annoyance, Frustration
- Sadness: Dismay, Grief, Suffering
- Fear: Panic, Terror
- Disgust/Aversion: Revulsion, Boredom
- Embarrassment: Shame, Guilt
In addition, we will provide Valence, Arousal, and Dominance (VAD) ratings for the video clips. Each rating ranges from 1 to 9, where 1 denotes a low level of Valence (or Arousal/Dominance) in the emotion expressed by the main human subject in the clip and 9 denotes a high level.
Of these, 2,200 clips come with ground-truth annotations; each clip may be associated with multiple emotion labels, reflecting the richness and co-occurrence of affective states in real-world behavior. A further 1,000 clips will also be provided, forming part of the test set.
Challenge Tasks
Participants are invited to compete in one or more of the following tasks:
1. Multi-Label Emotion Prediction
Predict one or more emotion categories associated with each video clip.
Goal: Develop models capable of handling overlapping or blended emotions expressed through bodily cues.
Evaluation Criteria: Evaluation will use set-based metrics (intersection over union between the predicted and ground-truth label sets), with bonus points awarded even for misclassified samples, based on emotion distance. Evaluation will be performed on the test set only.
Required Response Format: Each submission should include the video ID (for videos in the test set) and the list of predicted emotion labels, drawn from both the main categories and the sub-categories. Submissions should ideally be a CSV file with the columns id and prediction.
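To make the expected layout concrete, here is a minimal Python sketch. The file name, the use of a semicolon to join multiple labels in the prediction column, and the example IDs and label spellings are illustrative assumptions; the official submission instructions define the canonical format.

```python
import pandas as pd

# Hypothetical multi-label predictions for three test-set clips.
# Video IDs, label spellings, and the ";" separator are assumptions for illustration.
predictions = {
    "vid_0001": ["Happiness", "Playful"],
    "vid_0002": ["Anger", "Frustration"],
    "vid_0003": ["Sadness", "Grief"],
}

submission = pd.DataFrame({
    "id": list(predictions.keys()),
    "prediction": [";".join(labels) for labels in predictions.values()],
})
submission.to_csv("task1_submission.csv", index=False)

# Illustration of the set-based evaluation: intersection over union
# between a predicted label set and a ground-truth label set.
def label_iou(pred: set, gold: set) -> float:
    union = pred | gold
    return len(pred & gold) / len(union) if union else 0.0

print(label_iou({"Happiness", "Playful"}, {"Happiness", "Contentment"}))  # 0.333...
```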
2. Emotional Explanation Generation
Generate textual explanations describing why a particular emotion is inferred to be expressed in the clip.
Goal: Encourage interpretability and reasoning in emotion recognition systems.
Evaluation Criteria: We will conduct curated human studies to evaluate the generated explanations. In addition, standard n-gram metrics such as BLEU and ROUGE will be used to assess explanation quality.
Required Response Format: An explanation should be generated for each label in the video's ground-truth label set. The submission can be a JSON file keyed by video ID, where each entry maps the ground-truth emotion labels to the corresponding generated explanations.
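As an illustration of the expected JSON structure, here is a minimal Python sketch; the video IDs, label names, explanation texts, and file name are all hypothetical.

```python
import json

# Hypothetical explanation entries keyed by video ID; each entry maps a
# ground-truth emotion label to its generated explanation. All values are illustrative.
submission = {
    "vid_0001": {
        "Happiness": "The subject bounces lightly with open, upward-swinging arms.",
        "Playful": "Springy, exaggerated gestures and quick changes of direction suggest playfulness.",
    },
    "vid_0002": {
        "Anger": "Clenched fists and abrupt, forceful arm movements directed at another person.",
    },
}

with open("task2_submission.json", "w") as f:
    json.dump(submission, f, indent=2)
```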
3. Valence–Arousal–Dominance (VAD) Prediction
Predict continuous valence, arousal, and dominance scores for each video.
Goal: Capture the underlying affective dimensions that complement categorical emotion representations.
Evaluation Criteria: We will use the R² metric for this regression task. Evaluation will be performed on the test set only.
Required Response Format: As in Task 1, the submission should be a CSV file with a column id for the video IDs, plus three additional columns, valence, arousal, and dominance, containing the predicted values.
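Below is a minimal sketch of the Task 3 submission layout and the R² computation, assuming scikit-learn is used for the metric; the file name, video IDs, and all numeric values are illustrative.

```python
import pandas as pd
from sklearn.metrics import r2_score

# Hypothetical VAD predictions on the 1-9 scale; IDs and values are illustrative only.
submission = pd.DataFrame({
    "id": ["vid_0001", "vid_0002", "vid_0003"],
    "valence": [7.2, 2.8, 3.5],
    "arousal": [6.1, 7.4, 4.0],
    "dominance": [5.5, 6.3, 2.9],
})
submission.to_csv("task3_submission.csv", index=False)

# Sketch of the R² computation on one dimension, given ground-truth values.
gold_valence = [7.0, 3.1, 3.3]
print(r2_score(gold_valence, submission["valence"]))
```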
4. Open-Ended Usage Track
In this track, participants are encouraged to submit any material (e.g., a video or a demo of a created system) that demonstrates creative use of our bodily expressed emotion dataset beyond the standard tasks listed above. Examples include applying the dataset to downstream tasks other than emotion recognition, or using it to build artistic or interactive systems. The submission format will support uploading a video (for a demo or similar) and a PDF report detailing the method. Paper submissions for this track are also highly encouraged, in which case the PDF report can be omitted.
A public leaderboard will be hosted, and top-performing submissions will be invited to present their work at the workshop.
Participation
Interested in the challenge? To get started, please fill out the form linked below.
An invite link to the ABEE dataset will be sent to your email upon successful completion of this form.
Awards & Recognition
Outstanding teams will be recognized at the workshop, with best-paper and best-performance awards for each track. Top-performing teams and submissions will automatically be invited to present at the conference.
Important Dates
ABEE Dataset Release: October 10th, 2025
Deadline for Paper Submissions: November 4th, 2025
Notifications sent to Authors: November 7th, 2025
Challenge Leaderboard Closes: December 31st, 2025
Solution Submission Deadline for All Tracks: January 10th, 2026
Workshop Presentations: January 26th, 2026
* Note that paper submissions and challenge result submissions have different deadlines. We will allow challenge submissions until the workshop to encourage iterative refinement of predictions. If the corresponding submitted paper is accepted, it may be possible to present an updated version of it as well.
Organizing Committee
Distinguished Professor, Penn State (Workshop Chair)
PhD Student, Penn State
Director, The RAD Lab
Professor, Penn State
Associate Professor, University of Illinois Chicago
Other Contributors
PhD Student, Penn State
Undergrad Student, Penn State
Independent Contributor
© The 2nd International Workshop on Bodily Expressed Emotion Understanding (BEEU, 2026)