Self-Driving Cars may not happen
I have doubts about self-driving cars. My entire argument rests on an analogy with software translators.
Software translators began as simple dictionaries. You type in ‘cat’, it returns ‘le chat’. Next, programmers started to feed in simple rules, so ‘black cats’ could return ‘les chats noirs’. The rules became so advanced that eventually, the computer could take a simple sentence such as ‘I see a cat’ and get the right answer back 90% of the time.
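To make the dictionary-plus-rules idea concrete, here is a minimal sketch. The vocabulary, the two rules, and the phrases are invented for illustration; real rule-based systems used vastly larger dictionaries and rule sets, and this toy ignores gender agreement entirely.

```python
# A toy dictionary-plus-rules translator, illustrating the approach
# described above. Vocabulary and rules are invented for this sketch.

LEXICON = {
    "cat": "chat",
    "cats": "chats",
    "black": "noir",
}

def pluralize_adj(adj):
    """A (simplified) French agreement rule: plural adjectives add -s."""
    return adj + "s"

def translate_noun_phrase(english):
    """Translate a bare noun or an 'ADJ NOUN' phrase using two rules:
    1. French places the adjective after the noun.
    2. The adjective agrees in number with the noun."""
    words = english.split()
    if len(words) == 1:
        return "le " + LEXICON[words[0]]
    adj, noun = words
    fr_adj, fr_noun = LEXICON[adj], LEXICON[noun]
    if noun.endswith("s"):  # crude plural test
        return "les " + fr_noun + " " + pluralize_adj(fr_adj)
    return "le " + fr_noun + " " + fr_adj

print(translate_noun_phrase("cat"))         # le chat
print(translate_noun_phrase("black cats"))  # les chats noirs
```

Each new rule patches one pattern, which is exactly why the approach looked like steady progress from the outside: add enough rules and ever more sentences come out right.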
From an outsider’s perspective, it seems like we’re getting there. It seems like the last step - those few finicky sentences we’ve yet to teach a computer - is all that stands between us and a fully working artificial intelligence. But that last step will not happen in our lifetime, no matter how much money people throw at it.
Let’s take that last sentence and drill into why it won’t happen. ‘Last step’ could mean ‘step’ in the sense of a stage within a task, or ‘step’ as in stairs. To work out which, the translating computer must recognise that we’re talking about a process, and that ‘step’ therefore means ‘stage’. But the computer can’t recognise that, because it can’t understand anything.
These problems arise constantly.
Should we translate the English ‘or’ into Polish as ‘lub’ or ‘czy’? ‘Lub’ is the ‘or’ of a statement, as in ‘tea or coffee, either is fine’; ‘czy’ is the ‘or’ of a question, as in ‘tea or coffee, which would you like?’.
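A programmer’s first instinct here might be a surface rule: pick ‘czy’ whenever the sentence ends in a question mark, ‘lub’ otherwise. The sketch below shows that rule and a counterexample; the example sentences are invented, and the Polish judgments simply follow the lub/czy distinction described above.

```python
# A sketch of a surface-level disambiguation rule for translating 'or'
# into Polish, and a case where punctuation fails to track meaning.
# The heuristic and examples are invented for illustration.

def choose_polish_or(sentence):
    """Pick 'czy' or 'lub' from punctuation alone."""
    return "czy" if sentence.strip().endswith("?") else "lub"

# The rule handles the easy cases...
print(choose_polish_or("Tea or coffee, either is fine."))        # lub
print(choose_polish_or("Tea or coffee, which would you like?"))  # czy

# ...but punctuation is not meaning. Here the speaker is offering
# either drink - the inclusive reading - yet the question mark
# pushes the rule toward 'czy'.
print(choose_polish_or("Can I get you tea or coffee, whichever you prefer?"))
```

Every patch like this fixes one surface pattern while the underlying problem - knowing what the speaker means - stays untouched.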
If a computer finds the Polish sentence ‘kot wraca’, how does it know whether this means ‘a cat returns’ (i.e. a new one) or ‘the cat returns’ (i.e. the cat we’ve already been introduced to)? Polish has no articles, so nothing in the sentence itself says which.
When a book is ‘curious’, it means the book is strange and interesting. When a person is ‘curious’, it means they’re actively interested in something. When a cat is ‘curious’, it could mean either. What general rule could we possibly use to tell a computer which is which? A person can see that a book is not interested in anything, because it’s not sentient. How do we explain to a computer which things are and are not sentient?
The Necessity of ML
No computer can answer any of these questions, because the answers require understanding context, and computers understand nothing. To have a computer translate an ordinary paragraph, we would first have to create general artificial intelligence, then teach it to speak, then finally teach it a second language.
Self-driving cars have been almost ready for a while, and I hope they’re ready soon. But I wonder if that last little bit isn’t so little. Perhaps cars will always have trouble telling the difference between a child running across the road and a plastic bag. Perhaps they won’t be able to play well with cyclists because they can’t overtake, or they overtake badly.
Part of driving is understanding that someone will react to you, with the assumption that they know you will react to them, and so on, back and forth. It might seem a trivial thing to pass someone with a knowing glance. But then again, that seemingly trivial thing could require a massive step forward in how much cars understand.