That's like asking "programs that execute within a predictable scope for what?"
For whatever they're being written for. Alignment's goal is to have models do what they're being trained to do and not other random things. It won't be uniform; for example, what counts as "inappropriate" will vary between countries.
More like self-driving cars that stay within the lines instead of treating everything as an off-road opportunity. If you wanna fire rifles from them or mod them to run people over, that's on you, right?