Discussion about this post

If experts in every field are saying "AI can't do my job, but it can probably do everyone else's," why would we trust their beliefs about fields they *aren't* experts in, but not their beliefs about fields they *are* experts in?

Wouldn't it make more sense to assume that everyone thinks AI can't do their *own* job because they know enough about their own field to see the ways in which an AI output fails, but they think it can do other jobs because they aren't experts in those fields and so can't tell why an AI output might look good without actually holding up? A sort of "Gell-Mann amnesia" effect.
