LI Xinran
👋 Welcome!
I am LI Xinran (李欣然), a dual-degree M.S. student at Waseda University (Japan) and Dalian University of Technology (China), specializing in Natural Language Processing (NLP).
My research focuses on Emotion Recognition in Conversation (ERC) and Aspect-Based Sentiment Analysis (ABSA), leveraging large language model techniques and designing Curriculum Learning frameworks for downstream NLP tasks.
I am actively applying for Ph.D. positions; please feel free to contact me!
🎓 Education
Waseda University, Graduate School of Information, Production and Systems — Dual M.S. Degree (Expected Mar. 2027)
Research on Aspect-Based Sentiment Analysis (ABSA) with Curriculum Learning and Large Language Models.
Kitakyushu Academic Research City Scholarship Recipient.

Dalian University of Technology, School of Software — Dual M.S. Degree (Expected 2027)
Research on Emotion Recognition in Conversation (ERC) using Graph Neural Networks and Improved Curriculum Learning.
Outstanding Master’s Student Award (Top 20/300).

Dalian University of Technology, School of Software — B.Eng. in Software Engineering (2020–2024)
Undergraduate research in NLP and medical imaging.
Average Score: 88, GPA: 3.8/4.0
Direct Admission to Master’s Program (Top 17%).
🧠 Research Interests
- Natural Language Processing (NLP)
- Emotion Recognition in Conversation (ERC)
- Aspect-Based Sentiment Analysis (ABSA)
- Large Language Models (LLMs)
- Curriculum Learning (CL)
🧩 Publications
1. Do LLMs Feel? Teaching Emotion Recognition with Prompts, Retrieval, and Curriculum Learning
AAAI 2026 (CORE A*, CCF A) — First Author
Authors: Xinran Li, Yu Liu, Jiaqi Qiao, Xiujuan Xu
We propose PRC-Emo, a new ERC training framework that integrates Prompt engineering, demonstration Retrieval, and Curriculum learning to investigate whether LLMs can effectively perceive emotions in conversations. PRC-Emo introduces emotion-sensitive prompt templates capturing both explicit and implicit emotional cues, constructs the first ERC-specific demonstration retrieval repository, and incorporates curriculum strategies into LoRA fine-tuning via weighted emotional shifts. Experiments on the IEMOCAP and MELD datasets achieve new state-of-the-art (SOTA) results, demonstrating the strong generalizability of our approach.
Paper (arXiv:2511.07061) | Code (GitHub)
2. Long-Short Distance Graph Neural Networks and Improved Curriculum Learning for Emotion Recognition in Conversation
ECAI 2025 (CORE A, CCF B) — First Author
Authors: Xinran Li, Xiujuan Xu, Jiaqi Qiao
Developed LSDGNN, a novel framework combining long- and short-distance graph neural networks with Improved Curriculum Learning (ICL) for ERC, improving emotion classification on the IEMOCAP and MELD datasets.
Paper (arXiv:2507.15205) | Code (GitHub)
3. A Unified Framework for Emotion Recognition and Sentiment Analysis via Expert-Guided Multimodal Fusion with Large Language Models
Submitted to ICASSP 2026 — Third Author (second student author)
Proposed an Expert-Guided Multimodal Fusion (EGMF) framework that integrates multimodal cues via LLMs with LoRA fine-tuning, achieving strong results on the MELD, CHERMA, MOSEI, and SIMS-V2 datasets.
4. CheX-DS: Improving Chest X-ray Image Classification with Ensemble Learning Based on DenseNet and Swin Transformer
IEEE BIBM 2024 (CCF B) — First Author
Authors: Xinran Li, Xiujuan Xu, Yu Liu, Xiaowei Zhao
Designed CheX-DS, an ensemble combining DenseNet and Swin Transformer for long-tailed medical image classification, achieving 83.76% AUC on the NIH ChestX-ray14 dataset.
🏅 Honors & Awards
- Kitakyushu Academic Research City Scholarship, Waseda University (2025)
- Outstanding Master’s Student Award, DUT (2025)
- Direct Admission to Master’s Program, DUT (2024)
- Second-Class Scholarship for Academic Excellence, DUT (2021, 2022)
- Top 17% GPA, DUT Undergraduate Program (2020–2024)
🛠️ Skills
Languages & Tools: Python, Java, LaTeX, Git, Jupyter Notebook, PyTorch, HuggingFace
Domains: Graph Neural Networks, Transformers, Curriculum Learning, Large Language Models
📫 Contact
Email:
I am actively applying for Ph.D. positions — please feel free to contact me if you are interested in research collaboration or supervision opportunities!
