Alignment refers to the process of ensuring that AI systems act in accordance with human values. I don't see why a superhuman AI would require different prompting than what is in use today.
The idea is that keeping a superhuman AI aligned would require more than today's prompting: humans cannot reliably supervise or evaluate a system smarter than themselves, so alignment techniques would have to scale beyond human-level oversight. This is the whole premise of OpenAI's Superalignment research and its recent publication.