This time, not of politicians, but of artificial intelligence software. In a Wall Street Journal article centered on Google's firing of AI engineer Blake Lemoine, there was this characterization of the views of AI specialists:
AI specialists generally say that the technology still isn’t close to humanlike self-knowledge and awareness.
That seems a bit narrow and self-important to me. What's the basis (factual/empirical, logical/philosophical, or any other) for seriously claiming that self-knowledge and awareness have to be humanlike in order to exist?