Justin E. H. Smith in his Substack newsletter, The Hinternet:
I would like at least to begin here an argument that supports the following points. First, we have no strong evidence of any currently existing artificial system’s capacity for conscious experience, even if in principle it is not impossible that an artificial system could become conscious. Second, such a claim as to the uniqueness of conscious experience in evolved biological systems is fully compatible with naturalism, as it is based on the idea that consciousness is a higher-order capacity resulting from the gradual unification of several prior capacities—embodied sensation, notably—that for most of their existence did not involve consciousness. Any AI project that seeks to skip over these capacities and to rush straight to intellectual self-awareness on the part of the machine is, it seems, going to miss some crucial steps. However, finally, there is at least some evidence at present that AI is on the path to consciousness, even without having been endowed with anything like a body or a sensory apparatus that might give it the sort of phenomenal experience we human beings know and value. This path is, namely, the one that sees the bulk of the task of becoming conscious, whether one is an animal or a machine, as lying in the capacity to model other minds.
More here.