Abstract
This article examines questions of AI consciousness through the lens of philosophical zombies: metaphysically possible beings that lack phenomenological experience. While it is unlikely that philosophical zombies (p-zombies) exist, AI zombies plausibly do. I first survey the different definitions of consciousness, focusing on phenomenological consciousness, and then discuss Anil Seth’s ‘Real’ problem and David Chalmers’ ‘Hard’ problem of consciousness. I then draw a distinction between AI and the idea of a philosophical zombie. I present the Chinese room and Mary’s room thought experiments, which argue that functionalism is inadequate and that AI zombies lack consciousness. However, the problem of other minds suggests that one can never know for certain whether anyone other than oneself is conscious. This poses an ethical issue for AI: if we can never know whether an AI is a zombie, should we treat it like other people? I draw an analogy with animal rights and argue that we should treat AI ethically to forestall possible ethical problems in the future.
