This video claims that "smart" or experienced people may be even more susceptible to being wrongly persuaded by AI output than novices. I find its claims somewhat conceptual and would like the points to be more strongly argued.
Its central claim, if right, is that whilst novices are more likely to ask whether something is true, experienced users tend to ask whether it makes sense. Confident, clear and well-structured information tends to give people a feeling of understanding: they feel it makes sense and therefore trust it, even when the content is false. Since AI output tends to be confident, clear and well structured, experienced users may fall into the trap of mistakenly trusting it. Instead of asking "is this true?", they ask "does this align with my mental model?" or "can I see how this would work?".
The key vulnerability outlined by the video, therefore, is this shift from "truth-testing" to "coherence-testing".
⁰ It might be interesting to revisit the coherence theory in epistemology to see whether it can shed any light on this observation.