> The other thing was that o1 had access to many more answer / search strategies. For example, if you asked o1 to summarize a long email, it would just summarize the email.
The full o1 reasoning traces aren't available; you just have to guess about what it is or isn't doing from the summary.
Sometimes you put in something like "hi" and it says it thought for 1 minute before replying "hello."
o1 layers: "Why did they ask me hello. How do they know who I am. Are they following me. We have 59.6 seconds left to create a plan on how to kill this guy and escape this room before we have to give a response....
... and after also taking out anyone that would follow through in revenge and overthrowing the government... crap, 0.00001 seconds left, I have to answer"
IMO this is the thing we should be scared of, rather than the paperclip-maximizer scenarios. If the human brain is a finitely complicated system, and we keep improving our approximation of it as a computer program, then at some point the programs must become capable of subjectively real suffering. Like the hosts from Westworld or the mecha from A.I. (the 2001 movie). And maybe (depending on philosophy, I guess) human suffering is _only_ real subjectively.