While it may sometimes feel like your computer has a mind of its own, deep down, despite your frustration, you know it is really just following a set of commands programmed by a human.
But maybe that's not true anymore.
Google engineer Blake Lemoine recently revealed that he believed one of the company's artificially intelligent (AI) chatbot generators, LaMDA, had become a living, self-aware being. According to multiple media reports, he was suspended from his job after his superiors dismissed the idea and he went public with the information.
This, of course, raises a lot of very intriguing questions, the biggest being whether it is actually true. Many experts don't think so, and we'll talk to one of them today: Dr. John Nicholas, Professor of Computer Information Systems at the University of Akron.
But what if it is true? What moral and ethical obligations do we have to this new life form that lives not in the physical world, but in the realm of silicon chips and cyberspace?
Have we really reached what is known as the "Technological Singularity"?
What happens if computers start thinking for themselves and decide they don't really like us all that much, and don't want to follow our directions? Are we approaching the Skynet-dominated world Arnold Schwarzenegger made famous in The Terminator, or the nuclear holocaust-threatening nightmare from the 1983 Matthew Broderick movie WarGames?
Dr. John Nicholas, University of Akron