One of the many confusions in the field of AI is the conflation of intelligence and consciousness.
Humans are both conscious AND intelligent beings. There is possibly a dependency of one on the other, especially of consciousness on intelligence, but they are distinct features of our mind.
Consciousness in its simplest form can be defined as the “awareness of our own being – of our own existence moment after moment relative to the world”. This definition is itself disputed, but it is a good starting point for understanding consciousness. Even this simplistic definition is mind-boggling if you think about it. We don’t yet understand why higher-order animals like humans needed such self-awareness, or how this feature evolved. The current assumption is that lower-order organisms like the amoeba have almost no such self-awareness.
Consciousness as a feature has not yet been replicated in machines, even in its most rudimentary form. Consciousness is still in the realm of philosophical discussion and is slowly moving into the domain of neuroscience. Hopefully someday it might enter the mathematical/engineering domain, but we are not there yet.
In its simplest form, intelligence is the ability to “acquire and apply knowledge and skill”.
There are varied opinions on this definition too, but there is enough shared understanding that the idea has entered the computational space. What we see today as AI systems is our effort to simulate intelligence in machines.
Even the lowly calculator was an effort to replicate mathematical intelligence, and today a calculator is “smarter” than us in its ability to perform mathematical operations. Interestingly, though, we don’t worry much about the calculator being smarter than us.
Intelligence is measurable, be it logical, mathematical, creative, or even social intelligence. Along each dimension we can have a rough definition of what intelligence is and how to measure it. This makes it possible to simulate a version of this intelligence in machines, which we have been attempting to do with good success in recent years.
Fear of AI in this context
People perceive AI differently based on their understanding of the topic.
Fear of conscious AI – People mostly unrelated to the field, who read about AI in articles and op-eds, assume we are en route to building a human-like mind with consciousness and intelligence combined. The image that gets conjured up is a machine with intent – in the dystopian version, even an intent to destroy us. This imagined world is something we are not even remotely close to. Conscious intelligent systems are theoretically possible, but we have made no headway in that direction.
AI as smarter software – AI engineers/scientists think of the systems they are building as machines that have the ability to recognize patterns and apply them to achieve some useful purpose. They simply cannot fathom a world where this pattern learning/application system suddenly becomes one with intent and turns destructive.
Fear of decision-making systems – There is a third category, which includes philosophers and scientists who have thought through the possibilities and challenges we are going to encounter in the future. In this world view, we are en route to much more complex AI (without consciousness) built by engineers who do not have a good understanding of the unintended consequences of the goals they set for these machines. This can result in actions by the machines with bad consequences for us. A few examples:
- A system perpetuating racial or gender bias within an organization by “learning” from the organization itself (this is already a challenge today).
- A car that decides (algorithmically) to kill the driver to save two pedestrians, or vice versa. This is a challenge we are going to face very soon, but we don’t have a good handle on the legal and ethical implications yet.
- (Warning: futuristic, wonky example) A scenario where a machine built to look after the well-being of a human gets the command “help me end my mental agony” after a divorce, and proceeds to kill the person painlessly as the most effective way of ending the suffering.
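The first example above needs no malice or intent at all: a system that simply mirrors the statistics of past decisions will reproduce whatever bias those decisions contain. Here is a minimal, hypothetical sketch (the data and the frequency-counting “model” are invented for illustration) of how that happens:

```python
# Toy sketch: a model "trained" on an organization's historical hiring
# records reproduces the bias in those records -- no intent required,
# just pattern matching. All data here is hypothetical.
from collections import defaultdict

# Hypothetical historical records: (group, hired)
history = [
    ("A", True), ("A", True), ("A", True),
    ("B", False), ("B", True), ("B", False),
]

# "Training": estimate P(hired | group) by simple frequency counting.
counts = defaultdict(lambda: [0, 0])  # group -> [hired_count, total]
for group, hired in history:
    counts[group][0] += int(hired)
    counts[group][1] += 1

def predict_hire(group):
    # Recommend hiring if the majority of past cases in this group were hired.
    hired, total = counts[group]
    return hired / total >= 0.5

# The learned rule favours group A purely because past decisions did:
print(predict_hire("A"))  # True
print(predict_hire("B"))  # False
```

Nothing in the code mentions bias, yet the output encodes it, because the training data does. This is exactly the pattern-learning behaviour described in the second category above, producing the harm described in the third.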
This category of concerns is legitimate and needs further thinking by both philosophers and engineers. Engineers alone will not be able to figure this out. We need to think through these issues without jumping into the wonky scenarios too early.
As for conscious AI, we will have to experience it in movies for a few more decades before having to worry.