# Gao et al. (2020)

## Publication
SUPERT: Towards New Frontiers in Unsupervised Evaluation Metrics for Multi-Document Summarization
## Repositories

Our implementation uses this fork of the original repository.
## Available Models

### SUPERT
- Description: A reference-free evaluation metric for multi-document summarization
- Name: `gao2020-supert`
- Usage:

  ```python
  from repro.models.gao2020 import SUPERT

  model = SUPERT()
  inputs = [
      {"sources": ["The first document", "The second"], "candidate": "The summary to score"}
  ]
  macro, micro = model.predict_batch(inputs)
  ```

  `macro` and `micro` are the averaged and per-input SUPERT scores, respectively.
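For larger batches, the `inputs` list can be assembled from parallel lists of source-document clusters and candidate summaries. The helper below is a hypothetical sketch of that bookkeeping, not part of `repro`:

```python
# Hypothetical helper for assembling a SUPERT input batch; not part of repro.
def build_inputs(source_clusters, candidates):
    """Pair each cluster of source documents with its candidate summary."""
    if len(source_clusters) != len(candidates):
        raise ValueError("Each candidate summary needs a matching source cluster")
    return [
        {"sources": list(sources), "candidate": candidate}
        for sources, candidate in zip(source_clusters, candidates)
    ]

inputs = build_inputs(
    [["The first document", "The second"]],
    ["The summary to score"],
)
# inputs is now in the format expected by SUPERT.predict_batch
```

The resulting list can be passed directly to `model.predict_batch(inputs)` as in the usage example above.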
## Implementation Notes

## Docker Information
- Image name: `danieldeutsch/gao2020`
- Build command:

  ```shell
  repro setup gao2020
  ```

- Requires network: No
## Testing

```shell
repro setup gao2020
pytest models/gao2020/tests
```
## Status

- [x] Regression unit tests pass
      See here
- [ ] Correctness unit tests pass
      No expected outputs given in the original repository
- [ ] Model runs on full test dataset
      Not tested
- [ ] Predictions approximately replicate results reported in the paper
      Not tested
- [ ] Predictions exactly replicate results reported in the paper
      Not tested