@Theo
I don't know what "genuine intelligence" is
You might not be able to define it to the point of having a scientific theory, but you can still reasonably compare examples of relative intelligence in behavior. If I were to ask you, or even an inexperienced student driver, what you thought a lane end/merge marking looked like, or a STOP sign, or whatever, it's a pretty safe bet you would demonstrate that you have learned something at a much higher level of abstraction than the level at which the current crop of machine learning operates.
I don't think we can deduce that from the paper.
No deduction is necessary. The technology is what it is, and it can be evaluated for flaws independently of any particular application. Bruce has covered other similar attacks, and Boing Boing has an even more comprehensive list of machine learning failures.
Note that despite all the features humans still sometimes end up driving the wrong way. Usually it's DWI, but occasionally it's quite inexplicable.
What failings individual humans have is not the issue. The real concern here is how significant errors in automation can produce dangerous behavior at a previously unknown scale. Yes, roughly a hundred people die every day on America's highways due to human error. It'd be good if automation could eliminate those deaths, but not if we're just trading random accidents for scores of cars sent running amok simply because a bird happened to poop in just the wrong pattern. And that's to say nothing of deliberate offensive hacking that could cause possibly millions of cars to crash all at the same time.

@Phaete
Don't try to convince me that you can see more accurate on low resolution photos then on high res ones.
That should actually be a pretty easy task if you take the time to understand how this technology works. Again, the underlying problem is overtraining on particular data points in the sample set. When you try to teach the neural network what a STOP sign is, for example, it doesn't learn the same things a human does. It has no understanding of octagons, red backgrounds, white printing, Roman letters, or English words. It just homes in on whichever pixels are most representative of the object being shown. The reason we see "unobtrusive" failures is that just those particular bits of a scene/object are replaced with the same small set of bits that were trained to be recognized as some other object. Whether that happens by chance or by intent, you now have a system that fails precisely because of its high-resolution training data.

What lower-resolution images do is let you guide training away from fine detail and back toward something closer to the higher-level concepts a human would be learning. A blurry reddish blob at a corner in the distance has a very good chance of being a STOP sign, even if you can't make out the details. Such a system (especially backed by genuine AI) could indeed be more accurate/safer than a stupid high-resolution system that can easily be fooled into giving false positives with 100% confidence.
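To make that concrete, here is a rough Python sketch of the kind of preprocessing I mean (my own illustration, not anything from the paper; the dataset path, image sizes, and blur settings are hypothetical placeholders): aggressively downsample and blur the training images so the network is forced to learn coarse shape and color cues instead of the fine pixel patterns a sticker attack exploits.

    # Sketch only: degrade training images so the classifier learns coarse
    # cues (shape, color) rather than fine pixel detail.  The folder path
    # and sizes below are made-up placeholders, not a real dataset.
    from torchvision import datasets, transforms

    coarse = transforms.Compose([
        transforms.Resize((16, 16)),             # throw away fine detail
        transforms.GaussianBlur(kernel_size=3),  # smear what little remains
        transforms.Resize((64, 64)),             # scale back up to the model's input size
        transforms.ToTensor(),
    ])

    # Hypothetical folder of road-sign images, one subfolder per class.
    train_set = datasets.ImageFolder("signs/train", transform=coarse)

The trade-off, of course, is that you are deliberately throwing information away; the bet is that what's left is much closer to what a human actually uses to recognize a sign.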
If you can't train the correct algorithms for high res photos then that is a skill/knowledge (or money) problem, not a resolution problem.
My point remains that there currently are no "correct algorithms", because very few people are working forward from a theory of intelligence. Without that, we aren't able to build systems capable of correcting their own faulty learning.
And ofc, if your recognition is trained for low res pictures, it will do bad at high res and better at low res. That's just a user error.
Strongly disagree. Again, intelligence is simply a game changer. Even without images of any kind, I could teach a child to figure out what different kinds of road signs will look like. And it'd take more than a few misplaced dots to get them to give a false positive.