This is a common fallacy that stems from having low-level knowledge of a system without sufficient holistic understanding. Being "inside" the system gives people far too much confidence that they know exactly what's going on. Searle's Chinese room and Leibniz's mill thought experiments are past examples of this. Citing the source code for ChatGPT is just a modern iteration. The source code can no more tell us ChatGPT isn't conscious than our DNA tells us we're not conscious.