Interesting study, basically concluding that neural networks can be fooled by lies. Hell, so can we! I think that makes them a 'good enough' model for intelligence -- that is, when coupled with a way of experiencing reality (sensors) and using that as the ultimate judge of what is real and what is unreal. Worked well enough for us, eh?