Converging Lab
Solving social issues with digitally converging multidisciplinary knowledge is at the core of our research and development efforts.
We leverage insights from a diverse range of fields, merging inputs from the humanities and social sciences with expertise in digital technology.

In today's landscape, where digital technology is an indispensable part of our lives, society is continually evolving and giving rise to new values. The overarching goals include sustainable societal development, maintaining fairness, and addressing people's diverse interests. These challenges call for comprehensive solutions drawn from multiple perspectives. However, the increasing specialization of individual fields of study and research works against this.
Although each field has accumulated vast expertise, the challenge lies in maintaining a broader focus: delving deeply into one field while still taking others into account. In an increasingly diverse society, research and development confined to a single field is simply no longer sufficient.

Fujitsu's Converging Technologies Laboratory (CT Lab) recognizes the need for a fresh approach that integrates multiple disciplines. Consequently, we are actively engaged in research and development on Converging Technologies, which combine knowledge from various fields, including the natural sciences, social sciences, and humanities.

Fujitsu Research of America has actively collaborated with universities including Carnegie Mellon University (CMU) in Pittsburgh, PA, which has stood out as a pioneering partner since 2017.
Over the years, our joint research has garnered numerous accolades at top academic conferences worldwide. The technologies developed through this collaboration are not only recognized in academic circles but are also being applied to real-world scenarios on a global scale through Fujitsu's extensive business presence.
Researchers in the Converging Lab
Publications
2024
- Enhanced Product Classification Using Learned Prompt Ensembling and Dual Interpolation with CLIP-Based Model - Takahisa Yamamoto (Fujitsu Research of America)*; Koichiro Niinuma (Fujitsu Research of America); László A. Jeni (Carnegie Mellon University)
- Enhancing Multi-Class Mesoscopic Network Modeling with High-Resolution Satellite Imagery - Jiachao Liu (Carnegie Mellon University)*; Pablo Guarda (Fujitsu Research of America)*; Sean Qian (Carnegie Mellon University); Koichiro Niinuma (Fujitsu Research of America)
- Video Question Answering with Procedural Programs - Rohan Choudhury, Koichiro Niinuma, Kris M. Kitani, and László A. Jeni
- CoGS: Controllable Gaussian Splatting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024 - Heng Yu, Joel Julin, Zoltán Á. Milacski, Koichiro Niinuma, and László A. Jeni
- Occlusion Sensitivity Analysis with Augmentation Subspace Perturbation in Deep Feature Space. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024 - Pedro H. V. Valois, Koichiro Niinuma, Kazuhiro Fukui
- MIDAS: Mixing Ambiguous Data with Soft Labels for Dynamic Facial Expression Recognition. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024 - Ryosuke Kawamura, Hideaki Hayashi, Noriko Takemura, and Hajime Nagahara
2023
- DyLiN: Making Light Field Networks Dynamic. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023 - Heng Yu, Joel Julin, Zoltán Á. Milacski, Koichiro Niinuma, and László A. Jeni
- CoNFies: Controllable Neural Face Avatars. In Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition (FG), 2023 - Heng Yu, Koichiro Niinuma, László A. Jeni
- Visually explaining 3D-CNN predictions for video classification with an adaptive occlusion sensitivity analysis. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023 - Tomoki Uchiyama, Naoya Sogi, Koichiro Niinuma, Kazuhiro Fukui
2022
- Facial expression manipulation for personalized facial action estimation. In Frontiers in Signal Processing, 2022 - Koichiro Niinuma, Itir Onal Ertugrul, Jeffrey F. Cohn, László A. Jeni
- Uncertainty Prediction for Facial Action Units Recognition under Degraded Conditions. In Proceedings of the International Conference on Machine Learning and Applications (ICMLA), 2022 - Junya Saito, Sachihiro Youoku, Ryosuke Kawamura, Akiyoshi Uchida, Kentaro Murase, and Xiaoyu Mi
2021
- Systematic evaluation of design choices for deep facial action coding across pose. In Frontiers in Computer Science, 2021 - Koichiro Niinuma, Itir Onal Ertugrul, Jeffrey F. Cohn, László A. Jeni
- Synthetic expressions are better than real for learning to detect facial actions. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2021 - Koichiro Niinuma, Itir Onal Ertugrul, Jeffrey F. Cohn, László A. Jeni
- Facial action unit detection based on teacher-student learning framework for partially occluded facial images. In Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition (FG), 2021 - Ryosuke Kawamura and Kentaro Murase
- Detecting drowsy learners at the wheel of e-learning platforms with multimodal learning analytics. In IEEE Access, Volume 9, 2021 - Ryosuke Kawamura, Shizuka Shirai, Noriko Takemura, Mehrasa Alizadeh, Mutlu Cukurova, Haruo Takemura, and Hajime Nagahara
- Facial Action Unit Recognition Using Pseudo-Intensities and their Transformation. In Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition (FG), 2021 - Junya Saito, Takahisa Yamamoto, Akiyoshi Uchida, Xiaoyu Mi, Kentaro Murase
- Image emotion recognition using visual and semantic features reflecting emotional and similar objects. In IEICE Transactions on Information and Systems, 2021 - Takahisa Yamamoto, Shiki Takeuchi, Atsushi Nakazawa
- Smart Image Inspection using Defect-Removing Autoencoder. In Procedia CIRP, Volume 104, 2021, pages 559-564 - Yusuke Hida, Savvas Makariou, Sachio Kobayashi
2020
- Detecting learner drowsiness based on facial expressions and head movements in online courses. In Companion Proceedings of the 25th International Conference on Intelligent User Interfaces (IUI), 2020 - Shogo Terai, Shizuka Shirai, Mehrasa Alizadeh, Ryosuke Kawamura, Noriko Takemura, Yuki Uranishi, Haruo Takemura, and Hajime Nagahara
- Concentration estimation in e-learning based on learner's facial reaction to teacher's action. In Companion Proceedings of the 25th International Conference on Intelligent User Interfaces (IUI), 2020 - Ryosuke Kawamura and Kentaro Murase
- Estimation of wakefulness in video-based lectures based on multimodal data fusion. In Adjunct Proceedings of the ACM International Joint Conference on Pervasive and Ubiquitous Computing and the ACM International Symposium on Wearable Computers, 2020 - Ryosuke Kawamura, Shizuka Shirai, Mehrasa Alizadeh, Noriko Takemura, and Hajime Nagahara