Then I’d say this just points to a concerning lack of understanding of the security model on the implementer’s side.
In an ideal world, there would of course only be on-card verification, but resource constraints on smart card chips are still a factor.
In the second best of all worlds, Oracle would have one reference implementation each for trusted and for untrusted byte code, with a big bold disclaimer on when to use which, but I’m not convinced even that would prevent all possible implementation mistakes.