Publications

(For a full list of the PI's publications, see Google Scholar.)

Full List

  1. Tran, M., Kim, Y., Su, C.-C., Sun, M., Kuo, C.-H., & Soleymani, M. (2024). Exo2Ego-centric Self-supervised Learning for Social Role Understanding. European Conference on Computer Vision (ECCV) 2024.
  2. Tran, M., Chang, D., Siniukov, M., & Soleymani, M. (2024). Dyadic Interaction Modeling for Social Behavior Generation. European Conference on Computer Vision (ECCV) 2024.
  3. Chang, D., Shi, Y., Gao, Q., Xu, H., Fu, J., Song, G., Yan, Q., Zhu, Y., Yang, X., & Soleymani, M. (2024). MagicPose: Realistic Human Poses and Facial Expressions Retargeting with Identity-aware Diffusion. Forty-First International Conference on Machine Learning (ICML 2024). https://openreview.net/forum?id=jVXJdGQ4eD
  4. Shi, Z., O’Connell, A., Li, Z., Liu, S., Ayissi, J., Hoffman, G., Soleymani, M., & Matarić, M. J. (2024). Build Your Own Robot Friend: An Open-Source Learning Module for Accessible and Engaging AI Education. Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23137–23145. https://doi.org/10.1609/aaai.v38i21.30359
  5. Bohy, H., Tran, M., El Haddad, K., Dutoit, T., & Soleymani, M. (2024). Social-MAE: A Transformer-Based Multimodal Autoencoder for Face and Voice. 2024 IEEE 18th International Conference on Automatic Face and Gesture Recognition (FG), 1–5. https://doi.org/10.1109/FG59268.2024.10581940
  6. Tavabi, L., Tran, T., Borsari, B., Delacruz, J., Woolley, J. D., Scherer, S., & Soleymani, M. (2023). Therapist Empathy Assessment in Motivational Interviews. 2023 11th International Conference on Affective Computing and Intelligent Interaction (ACII), 1–8. https://doi.org/10.1109/ACII59096.2023.10388176
  7. Tran, T., Yin, Y., Tavabi, L., Delacruz, J., Borsari, B., Woolley, J. D., Scherer, S., & Soleymani, M. (2023). Multimodal Analysis and Assessment of Therapist Empathy in Motivational Interviews. Proceedings of the 25th International Conference on Multimodal Interaction, 406–415. https://doi.org/10.1145/3577190.3614105
  8. Tran, M., Yin, Y., & Soleymani, M. (2023). Personalized Adaptation with Pre-trained Speech Encoders for Continuous Emotion Recognition. Proc. INTERSPEECH 2023, 636–640. https://doi.org/10.21437/Interspeech.2023-2170
  9. Tran, M., & Soleymani, M. (2023). Privacy-preserving Representation Learning for Speech Understanding. Proc. INTERSPEECH 2023, 2858–2862. https://doi.org/10.21437/Interspeech.2023-2138
  10. Tran, M., & Soleymani, M. (2023). A Speech Representation Anonymization Framework via Selective Noise Perturbation. ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1–5. https://doi.org/10.1109/ICASSP49357.2023.10095173
  11. Yin, Y., Xu, J., Zu, T., & Soleymani, M. (2022). X-Norm: Exchanging Normalization Parameters for Bimodal Fusion. Proceedings of the 2022 International Conference on Multimodal Interaction, 605–614. https://doi.org/10.1145/3536221.3556581
  12. Zhang, L., Kolacz, J., Rizzo, A., Scherer, S., & Soleymani, M. (2022). Speech Behavioral Markers Align on Symptom Factors in Psychological Distress. 2022 10th International Conference on Affective Computing and Intelligent Interaction (ACII), 1–8. https://doi.org/10.1109/ACII55700.2022.9953849
  13. Zhu, H., Zheng, Z., Soleymani, M., & Nevatia, R. (2022). Self-Supervised Learning for Sentiment Analysis via Image-Text Matching. ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1710–1714. https://doi.org/10.1109/ICASSP43922.2022.9747819
  14. Tran, M., & Soleymani, M. (2022). A Pre-Trained Audio-Visual Transformer for Emotion Recognition. ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 4698–4702. https://doi.org/10.1109/ICASSP43922.2022.9747278
  15. Tran, M., Bradley, E., Matvey, M., Woolley, J., & Soleymani, M. (2021). Modeling Dynamics of Facial Behavior for Mental Health Assessment. 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021), 1–5. https://doi.org/10.1109/FG52635.2021.9666955
  16. Yin, Y., Lu, L., Xiao, Y., Xu, Z., Cai, K., Jiang, H., Gratch, J., & Soleymani, M. (2021). Contrastive Learning for Domain Transfer in Cross-Corpus Emotion Recognition. 2021 9th International Conference on Affective Computing and Intelligent Interaction (ACII), 1–8. https://doi.org/10.1109/ACII52823.2021.9597453
  17. Kontogiorgos, D., Tran, M., Gustafson, J., & Soleymani, M. (2021). A Systematic Cross-Corpus Analysis of Human Reactions to Robot Conversational Failures. Proceedings of the 2021 International Conference on Multimodal Interaction, 112–120. https://doi.org/10.1145/3462244.3479887
  18. He, Z., Tavabi, L., Lerman, K., & Soleymani, M. (2021). Speaker Turn Modeling for Dialogue Act Classification. Findings of the Association for Computational Linguistics: EMNLP 2021, 2150–2157. https://doi.org/10.18653/v1/2021.findings-emnlp.185
  19. Cheng, J., Fostiropoulos, I., Boehm, B., & Soleymani, M. (2021). Multimodal Phased Transformer for Sentiment Analysis. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2447–2458. https://doi.org/10.18653/v1/2021.emnlp-main.189
  20. Yin, Y., Lu, L., Wu, Y., & Soleymani, M. (2021). Self-Supervised Patch Localization for Cross-Domain Facial Action Unit Detection. 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021), 1–8. https://doi.org/10.1109/FG52635.2021.9667048
  21. Khurana, V., Gahalawat, M., Kumar, P., Roy, P. P., Dogra, D. P., Scheme, E., & Soleymani, M. (2021). A Survey on Neuromarketing using EEG Signals. IEEE Transactions on Cognitive and Developmental Systems.
  22. Rayatdoost, S., Yin, Y., Rudrauf, D., & Soleymani, M. (2021). Subject-Invariant EEG Representation Learning for Emotion Recognition. ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 3955–3959.
  23. Tavabi, L., Tran, T., Stefanov, K., Borsari, B., Woolley, J., Scherer, S., & Soleymani, M. (2021). Analysis of Behavior Classification in Motivational Interviewing. Proceedings of the Seventh Workshop on Computational Linguistics and Clinical Psychology: Improving Access, 110–115.
  24. Rayatdoost, S., Rudrauf, D., & Soleymani, M. (2020). Expression-Guided EEG Representation Learning for Emotion Recognition. ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 3222–3226.
  25. Klein, L., Ardulov, V., Hu, Y., Soleymani, M., Gharib, A., Thompson, B., Levitt, P., & Matarić, M. J. (2020). Incorporating Measures of Intermodal Coordination in Automated Analysis of Infant-Mother Interaction. Proceedings of the 2020 International Conference on Multimodal Interaction, 287–295.
  26. Tavabi, L., Poon, A., Rizzo, A. S., & Soleymani, M. (2020). Computer-Based PTSD Assessment in VR Exposure Therapy. International Conference on Human-Computer Interaction, 440–449.
  27. Yan, S., Huang, D., & Soleymani, M. (2020). Mitigating Biases in Multimodal Personality Assessment. Proceedings of the 2020 International Conference on Multimodal Interaction, 361–369.
  28. Yin, Y., Huang, B., Wu, Y., & Soleymani, M. (2020). Speaker-Invariant Adversarial Domain Adaptation for Emotion Recognition. Proceedings of the 2020 International Conference on Multimodal Interaction, 481–490.
  29. Rayatdoost, S., Rudrauf, D., & Soleymani, M. (2020). Multimodal Gated Information Fusion for Emotion Recognition from EEG Signals and Facial Behaviors. Proceedings of the 2020 International Conference on Multimodal Interaction, 655–659.
  30. Tavabi, L., Stefanov, K., Zhang, L., Borsari, B., Woolley, J. D., Scherer, S., & Soleymani, M. (2020). Multimodal Automatic Coding of Client Behavior in Motivational Interviewing. Proceedings of the 2020 International Conference on Multimodal Interaction, 406–413.
  31. Stefanov, K., Huang, B., Li, Z., & Soleymani, M. (2020). OpenSense: A Platform for Multimodal Data Acquisition and Behavior Perception. Proceedings of the 2020 International Conference on Multimodal Interaction, 660–664.
  32. Choube, A., & Soleymani, M. (2020). Punchline Detection using Context-Aware Hierarchical Multimodal Fusion. Proceedings of the 2020 International Conference on Multimodal Interaction, 675–679.
  33. Tran, M., Zhang, Y., & Soleymani, M. (2020). Towards a Friendly Online Community: An Unsupervised Style Transfer Framework for Profanity Redaction. Proceedings of the 28th International Conference on Computational Linguistics (COLING).
  34. Lu, L., Tavabi, L., & Soleymani, M. (2020). Self-Supervised Learning for Facial Action Unit Recognition through Temporal Consistency. British Machine Vision Conference (BMVC).
  35. Zhao, S., Wang, S., Soleymani, M., Joshi, D., & Ji, Q. (2019). Affective Computing for Large-Scale Heterogeneous Multimedia Data: A Survey. ACM Trans. Multimedia Comput. Commun. Appl., 15(3s). https://doi.org/10.1145/3363560
  36. Ringeval, F., Schuller, B., Valstar, M., Cummins, N., Cowie, R., Tavabi, L., Schmitt, M., Alisamir, S., Amiriparian, S., Messner, E.-M., Song, S., Liu, S., Zhao, Z., Mallol-Ragolta, A., Ren, Z., Soleymani, M., & Pantic, M. (2019). AVEC 2019 Workshop and Challenge: State-of-Mind, Detecting Depression with AI, and Cross-Cultural Affect Recognition. Proceedings of the 9th International on Audio/Visual Emotion Challenge and Workshop, 3–12. https://doi.org/10.1145/3347320.3357688
  37. Soleymani, M., Stefanov, K., Kang, S.-H., Ondras, J., & Gratch, J. (2019). Multimodal Analysis and Estimation of Intimate Self-Disclosure. 2019 International Conference on Multimodal Interaction, 59–68. https://doi.org/10.1145/3340555.3353737
  38. Tavabi, L., Stefanov, K., Nasihati Gilani, S., Traum, D., & Soleymani, M. (2019). Multimodal Learning for Identifying Opportunities for Empathetic Responses. 2019 International Conference on Multimodal Interaction, 95–104. https://doi.org/10.1145/3340555.3353750
  39. Song, Y., & Soleymani, M. (2019). Polysemous Visual-Semantic Embedding for Cross-Modal Retrieval. IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  40. Alameda-Pineda, X., Redi, M., Soleymani, M., Sebe, N., Chang, S.-F., & Gosling, S. (2019). Special Section on Multimodal Understanding of Social, Affective, and Subjective Attributes. ACM Trans. Multimedia Comput. Commun. Appl., 15(1s), 11:1–11:3. https://doi.org/10.1145/3292061
  41. Rayatdoost, S., & Soleymani, M. (2018). Cross-Corpus EEG-Based Emotion Recognition. 2018 IEEE 28th International Workshop on Machine Learning for Signal Processing (MLSP), 1–6. https://doi.org/10.1109/MLSP.2018.8517037
  42. Aljanaki, A., & Soleymani, M. (2018). A Data-driven Approach to Mid-level Perceptual Musical Feature Modeling. Proceedings of the 19th International Society for Music Information Retrieval Conference, ISMIR 2018, Paris, France, September 23-27, 2018, 615–621. http://ismir2018.ircam.fr/doc/pdfs/183_Paper.pdf
  43. Soleymani, M., & Mortillaro, M. (2018). Behavioral and Physiological Responses to Visual Interest and Appraisals: Multimodal Analysis and Automatic Recognition. Frontiers in ICT, 5, 17. https://doi.org/10.3389/fict.2018.00017