Multimodal Trajectory Prediction Model Competitively Recognized Internationally
10 July 2025, Taipei, Taiwan – Hon Hai Research Institute (HHRI), an R&D powerhouse of Hon Hai Technology Group (Foxconn) (TWSE: 2317), the world’s largest electronics manufacturer and technology service provider, has been recognized for its competitive work in trajectory prediction in autonomous driving technology.
ModeSeq's landmark achievements, including a first-place finish in the Waymo Open Dataset Challenge and a presentation at CVPR 2025, one of the world's most influential AI and computer vision conferences, gathering top-tier tech firms, research institutions, and academic leaders, highlight HHRI's growing leadership and technical excellence on the international stage.
“ModeSeq empowers autonomous vehicles with more accurate and diverse predictions of traffic participant behaviors,” said Yung-Hui Li, Director of the Artificial Intelligence Research Center at HHRI. “It directly enhances decision-making safety, reduces computational cost, and introduces unique mode-extrapolation capabilities to dynamically adjust the number of predicted behavior modes based on scenario uncertainty.”
Figure 1: The ModeSeq workflow. The model anticipates multiple possible future trajectories (highlighted by red vehicle icons and arrows), progressively analyzing the scenario and assigning a confidence score (e.g., 0.2) to each potential path.
On June 13, HHRI’s Artificial Intelligence Research Center, in collaboration with City University of Hong Kong, presented "ModeSeq: Taming Sparse Multimodal Motion Prediction with Sequential Mode Modeling" at CVPR 2025 (the IEEE/CVF Conference on Computer Vision and Pattern Recognition), where the paper was among only the 22% of submissions accepted.
The multimodal trajectory-prediction technology overcomes the limitations of prior methods by both preserving high performance and delivering diverse potential outcome paths. ModeSeq introduces sequential pattern modeling and employs an Early-Match-Take-All (EMTA) loss function to reinforce multimodal predictions. It encodes scenes using Factorized Transformers and decodes them with a hybrid architecture combining Memory Transformers and dedicated ModeSeq layers.
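To make the Early-Match-Take-All idea concrete, the following is a minimal, hypothetical sketch, not the team's implementation. It assumes modes are decoded sequentially (mode 0 first) and that the loss is applied to the earliest decoded mode whose endpoint lands within a matching radius of the ground truth, falling back to the closest mode when none matches; the function name, signature, and threshold are illustrative assumptions.

```python
import numpy as np

def emta_loss(pred_modes, gt_traj, match_radius=2.0):
    """Hypothetical Early-Match-Take-All (EMTA) loss sketch.

    pred_modes: (M, T, 2) trajectories, decoded sequentially (mode 0 first).
    gt_traj:    (T, 2) ground-truth trajectory.

    Unlike winner-take-all (which trains the globally closest mode),
    this sketch trains the EARLIEST decoded mode whose final position
    lies within `match_radius` of the ground-truth endpoint, falling
    back to the closest mode if none matches.
    """
    # Endpoint error of each mode against the ground-truth endpoint.
    end_errors = np.linalg.norm(pred_modes[:, -1] - gt_traj[-1], axis=-1)
    matches = np.nonzero(end_errors <= match_radius)[0]
    idx = int(matches[0]) if matches.size else int(end_errors.argmin())
    # Mean per-step displacement of the selected mode.
    loss = float(np.mean(np.linalg.norm(pred_modes[idx] - gt_traj, axis=-1)))
    return loss, idx
```

In this toy formulation, rewarding the earliest matching mode rather than the overall closest one is what pushes the decoder toward diverse, non-redundant modes.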
The research team further refined it into Parallel ModeSeq, which claimed victory in the prestigious Waymo Open Dataset (WOD) Challenge – Interaction Prediction Track at the CVPR WAD Workshop. The team’s winning entry surpassed strong competitors from the National University of Singapore, University of British Columbia, Vector Institute for AI, University of Waterloo and Georgia Institute of Technology.
Building on their success from last year – where ModeSeq placed second globally in the 2024 CVPR Waymo Motion Prediction Challenge – this year’s Parallel ModeSeq emerged triumphant in the 2025 Interaction Prediction track.
Led by Director Li of HHRI’s Artificial Intelligence Research Center, in collaboration with Professor Jianping Wang’s group at City University of Hong Kong and researchers from Carnegie Mellon University, ModeSeq outperforms previous approaches on the Motion Prediction Benchmark, achieving superior mAP and soft-mAP scores while maintaining comparable minADE and minFDE metrics.
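For readers unfamiliar with the benchmark metrics cited above, the displacement metrics can be sketched with their standard definitions (this is generic reference code, not the team's evaluation pipeline): ADE is the mean per-step displacement of a predicted trajectory from the ground truth, FDE is the displacement at the final step, and the benchmark keeps the minimum over the K predicted modes.

```python
import numpy as np

def min_ade_fde(pred_modes, gt_traj):
    """Standard minADE / minFDE over K predicted modes.

    pred_modes: (K, T, 2) predicted trajectories.
    gt_traj:    (T, 2) ground-truth trajectory.
    """
    # Per-step Euclidean distances for every mode: shape (K, T).
    dists = np.linalg.norm(pred_modes - gt_traj, axis=-1)
    min_ade = float(dists.mean(axis=1).min())  # best average error
    min_fde = float(dists[:, -1].min())        # best final-step error
    return min_ade, min_fde
```

mAP and soft-mAP additionally account for the confidence scores assigned to each mode, rewarding models that rank a correct mode highly, which is why they complement the purely geometric minADE/minFDE.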
About Hon Hai Research Institute
Founded in 2020 under Hon Hai Technology Group, the institute comprises five research centers and one laboratory. Each unit houses high-tech researchers dedicated to forward-looking studies over a 3–7 year horizon. Their mission is to strengthen long-term innovation and product development to support Foxconn’s transformation toward a “Smart First” future and to bolster the company’s “3+3+3” strategic operating model.



