Chihsheng Jin
Independent Researcher
Chihsheng completed his MS in Computational Linguistics in 2025. While in the FACTS.lab, he worked on information extraction using large language models.
Papers
- Walden, William, Pavlo Kuchmiichuk, Alexander Martin, Chihsheng Jin, Angela Cao, Claire Sun, Curisia Allen & Aaron White. 2025. Cross-Document Event-Keyed Summarization. In Hao Fei, Kewei Tu, Yuhui Zhang, Xiang Hu, Wenjuan Han, Zixia Jia, Zilong Zheng et al. (eds.), Proceedings of the 1st Joint Workshop on Large Language Models and Structure Modeling (XLLM 2025), 218–241. Vienna, Austria: Association for Computational Linguistics.
@inproceedings{walden_cross-document_2025,
    title = "Cross-Document Event-Keyed Summarization",
    author = "Walden, William and
      Kuchmiichuk, Pavlo and
      Martin, Alexander and
      Jin, Chihsheng and
      Cao, Angela and
      Sun, Claire and
      Allen, Curisia and
      White, Aaron",
    editor = "Fei, Hao and
      Tu, Kewei and
      Zhang, Yuhui and
      Hu, Xiang and
      Han, Wenjuan and
      Jia, Zixia and
      Zheng, Zilong and
      Cao, Yixin and
      Zhang, Meishan and
      Lu, Wei and
      Siddharth, N. and
      {\O}vrelid, Lilja and
      Xue, Nianwen and
      Zhang, Yue",
    booktitle = "Proceedings of the 1st Joint Workshop on Large Language Models and Structure Modeling (XLLM 2025)",
    month = aug,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.xllm-1.19/",
    pages = "218--241",
    ISBN = "979-8-89176-286-2",
    abstract = "Event-keyed summarization (EKS) requires summarizing a specific event described in a document given the document text and an event representation extracted from it. In this work, we extend EKS to the cross-document setting (CDEKS), in which summaries must synthesize information from accounts of the same event as given by multiple sources. We introduce \textbf{SEAMuS} (\textbf{S}ummaries of \textbf{E}vents \textbf{A}cross \textbf{Mu}ltiple \textbf{S}ources), a high-quality dataset for CDEKS based on an expert reannotation of the FAMuS dataset for cross-document argument extraction. We present a suite of baselines on SEAMuS{---}covering both smaller, fine-tuned models, as well as zero- and few-shot prompted LLMs{---}along with detailed ablations and a human evaluation study, showing SEAMuS to be a valuable benchmark for this new task."
}