• Type of the Paper
◦ Survey
Hallucination Categorization
→ Intrinsic Hallucination: the output contradicts the source.
→ Extrinsic Hallucination: the source alone cannot even determine whether the output is contradictory (the biggest problem is that, since the answer is produced from pretraining knowledge, no factual judgment about it can be made from the source).
e.g.)
Source text: The first vaccine for Ebola was approved by the FDA in 2019 in the US, five years after the initial outbreak in 2014. To produce the vaccine, scientists had to sequence the DNA of Ebola, then identify possible vaccines, and finally show successful clinical trials. Scientists say a vaccine for COVID-19 is unlikely to be ready this year, although clinical trials have already started.
Intrinsic Hallucination: The first Ebola vaccine was approved in 2021
Extrinsic Hallucination: China has already started clinical trials of the COVID-19 vaccine.
Origin of Hallucination
→ Hallucination from Data
• Heuristic data collection
◦ Fine-tuning: the data were collected such that the target reference is not supported by the source
◦ Pretraining: if duplicated examples are not removed, the model generates repetitions → hallucination
• Innate divergence
◦ hallucination is unavoidable when training on task datasets where output diversity matters (chit-chat, open-domain dialogue)
→ Hallucination from Training and Inference
• Imperfect representation learning
◦ the encoder has a defective comprehension ability (seems like under-training…)
◦ encoders learn wrong correlations between different parts of the training data (seems like under-training…)
• Erroneous decoding
◦ cross-attention attends to the wrong parts of the source
◦ a problem inherent to decoding strategies that inject randomness (e.g., top-k sampling)
• Exposure Bias
◦ the discrepancy between teacher-forcing training and inference, where the model keeps predicting conditioned on its own previous predictions → erroneous generation
• Parametric knowledge bias
◦ caused by the tendency to output pre-trained knowledge in preference to the input
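The decoding-side randomness mentioned above (e.g., top-k) can be made concrete with a minimal sketch; the next-token distribution and token strings below are invented for illustration:

```python
import random

def top_k_sample(token_probs, k, rng=random.Random(0)):
    """Sample a token from the k highest-probability candidates.

    Truncating to the top k and renormalizing keeps generation diverse,
    but any of the k tokens can be emitted -- including one unfaithful
    to the source -- which is the hallucination risk described above.
    """
    top = sorted(token_probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    total = sum(p for _, p in top)
    r = rng.random() * total
    for tok, p in top:
        r -= p
        if r <= 0:
            return tok
    return top[-1][0]

# Hypothetical next-token distribution after "...approved in"
probs = {"2019": 0.6, "2021": 0.25, "2014": 0.1, "Paris": 0.05}
print(top_k_sample(probs, k=1))  # greedy (k=1): always "2019"
print(top_k_sample(probs, k=3))  # may emit "2021", an unfaithful year
```

With k=1 the decoder collapses to greedy decoding; larger k trades faithfulness for diversity.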
Metric of Hallucination
Model-based Metrics (can be computed from source–output pairs)
• Information Extraction
◦ extract the parts of the source/reference that need verification
(e.g., ‘Brad Pitt was born in 1963.’ → (Brad Pitt, born-in, 1963))
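A minimal sketch of the verification step, with the triples written out by hand instead of produced by a real IE/OpenIE system:

```python
def triple_overlap(source_triples, output_triples):
    """Fraction of the output's (subject, relation, object) triples
    that also appear in the source's triples; 1.0 means every claim
    in the output is supported."""
    if not output_triples:
        return 1.0
    supported = sum(1 for t in output_triples if t in source_triples)
    return supported / len(output_triples)

# Hand-written triples standing in for an IE system's output
source = {("Brad Pitt", "born-in", "1963")}
faithful = {("Brad Pitt", "born-in", "1963")}
hallucinated = {("Brad Pitt", "born-in", "1961")}
print(triple_overlap(source, faithful))      # 1.0
print(triple_overlap(source, hallucinated))  # 0.0
```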
• QA-based
◦ a setting that assumes there is no hallucination if questions generated from the output can be answered from the source
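A sketch of the QA-based round trip, with question generation and source-side QA stubbed out (the question/answer pairs below are hypothetical):

```python
def qa_consistency(output_qa_pairs, answer_from_source):
    """QA-based metric sketch: questions are generated from the model
    output, then answered against the source; matching answers suggest
    no hallucination. QG and source-side QA are stubbed here."""
    matches = sum(1 for q, a_out in output_qa_pairs
                  if answer_from_source(q) == a_out)
    return matches / len(output_qa_pairs)

# Hypothetical (question, answer-from-output) pairs and a toy
# source-side QA lookup -- a real system would use QG and QA models.
pairs = [("When was the first Ebola vaccine approved?", "2019")]
source_qa = {"When was the first Ebola vaccine approved?": "2019"}
print(qa_consistency(pairs, source_qa.get))  # 1.0: answers agree
```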
• Natural Language Inference Metrics
◦ premise: source / hypothesis: generated output
◦ more robust to lexical variability than Information Extraction and QA-based metrics
→ Source text: The first vaccine for Ebola was approved by the FDA in 2019
→ Generated output: The first medicine for Ebola was approved by the FDA in 2019
Only the NLI metric seems able to catch the vaccine → medicine relation..!
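A sketch of the NLI scoring loop; `stub_nli` is a crude lexical stand-in, not a real MNLI-style classifier:

```python
def nli_faithfulness(premise, hypothesis, nli_model):
    """Score the generated output as P(premise entails hypothesis);
    a low entailment probability flags a likely hallucination."""
    entailment, neutral, contradiction = nli_model(premise, hypothesis)
    return entailment

def stub_nli(premise, hypothesis):
    """Crude lexical stand-in for an NLI classifier: 'entails' only if
    every hypothesis token appears in the premise. NOT a real model."""
    covered = all(w in premise.split() for w in hypothesis.split())
    return (1.0, 0.0, 0.0) if covered else (0.1, 0.2, 0.7)

src = "The first vaccine for Ebola was approved by the FDA in 2019"
out = "The first medicine for Ebola was approved by the FDA in 2019"
print(nli_faithfulness(src, out, stub_nli))  # low: "medicine" is unsupported
```

A real implementation would swap `stub_nli` for a classifier fine-tuned on an NLI dataset such as MNLI.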
• Faithfulness Classification Metrics
◦ built on synthesized datasets
• LM-based Metrics
◦ seems focused on methods that control hallucination with an LM
Human Evaluation
Hallucination Mitigation Methods
• Data-related methods
• Modeling and inference methods
◦ Architecture
▪ Encoder: learn better representations
▪ Attention: encourage the generator to pay more attention to the source
• [88] employ sparse attention to improve the model's long-range dependencies, in the hope of modeling more retrieved documents so as to mitigate hallucination in the answer.
• Wu et al. [210] adopt inductive attention, which removes potentially uninformative attention links by injecting pre-established structural information, to avoid hallucinations.
▪ Decoder
• modify the decoder structure, or
• modify the decoding strategy
• Training
◦ Planning/Sketching
◦ Reinforcement Learning (RL): some RL reward functions for mitigating hallucination are inspired by existing automatic evaluation metrics
◦ Multi-task Learning: NLI + Summarization → less hallucination
◦ Controllable Generation: giving up randomness reduces diversity but yields answers with improved faithfulness
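The randomness/faithfulness trade-off above can be seen in temperature-scaled softmax decoding, one common knob for it (a sketch; the logits are made up):

```python
import math

def temperature_softmax(logits, tau):
    """Softmax with temperature tau. As tau -> 0 the distribution
    approaches greedy decoding (less diversity, typically more
    faithful); large tau flattens it (more diversity, more
    hallucination risk)."""
    scaled = [l / tau for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

logits = [4.0, 2.0, 1.0]  # hypothetical next-token logits
print(temperature_softmax(logits, 0.5))  # sharp: mass concentrates on the argmax
print(temperature_softmax(logits, 2.0))  # flat: alternatives get real probability mass
```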