Part 2: Trust, Accountability, and Human Oversight
IN THIS MODULE
Why AI Can Be Confident and Wrong
AI often sounds confident, even when it's wrong. That's because it's designed to produce an answer, not to judge whether that answer is correct.
AI can “hallucinate” facts that aren’t true
It can give convincing but inaccurate answers
It often sounds polite or authoritative, but don't be fooled
Example:
AI might tell you a false statistic with perfect grammar. It “believes” nothing; it’s just predicting plausible text.
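The idea above can be sketched as a toy, assuming a made-up table of continuation probabilities (nothing here is a real model, and every phrase and number is invented for illustration):

```python
# Toy illustration, NOT a real language model: a text predictor scores
# candidate continuations by how plausible they sound. Truth is not
# part of the scoring. All phrases and probabilities are made up.
candidate_probs = {
    "72% of users prefer it": 0.61,          # fluent and specific, but invented
    "roughly half of users prefer it": 0.27,
    "I am not sure": 0.12,                   # honest, but rarely the most "plausible" text
}

# The predictor simply emits the highest-probability continuation.
answer = max(candidate_probs, key=candidate_probs.get)
print(answer)  # the confident-sounding fabrication wins
```

The fabricated statistic comes out on top precisely because it reads like the kind of sentence people write, which is all the scoring measures.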
Key takeaway:
Always double-check AI outputs. Confidence doesn’t mean correctness.