Because the world has had thousands of years to evolve, adapt, and settle around the capabilities and limitations of the human brain. It is somewhat stable and predictable.
Now if a new species of humans suddenly emerged with a brain not only as capable as ours but faster, more adaptable, and more extensible, it would disrupt the world.
Ecosystems (what I infer you mean by "the world", since things that are part of the world such as gravity, plate tectonics, etc don't follow the laws of evolution) haven't had thousands of years to evolve, adapt, and settle around the capabilities that we've had since the industrial revolution started. AI is merely an aspect and consequence of the logical progression of that continuity.
(In terms of tangible effect on the world today and what seems on track for the few decades to come, AI is still far behind coal - to pick one - when it comes to concrete negative externalities, though.)
I think something is wrong with the logic of your first paragraph.
Suppose (by way of analogy; this is not meant to be directly addressing AI) that in the next year some ingenious physicists finally get quantum gravity figured out, and that soon after the papers are published someone notices that the theory implies that one can use common household materials to make a device capable of destroying a planet.
(Seems unlikely, but then I don't think anyone would have predicted that the last revolutionary theory of gravity we worked out would imply that one can use not-so-common materials to make a reasonably small device capable of destroying a city, yet it turns out it does.)
It seems fairly likely that (1) the knowledge of how to do this could not be suppressed for ever, and that (2) if making the device turns out not to need all the fancy apparatus and hard-to-get materials that e.g. hydrogen bombs require, it wouldn't be that long before some idiot actually does it and destroys the earth.
And yet, these new discoveries would be "merely an aspect and consequence of the logical progression of that continuity", as you put it.
So, whatever the actual situation is with AI, I think it is demonstrably not the case that we can be confident AI won't somehow kill us all or destroy the world merely because AI is a thing we made and the world has had thousands of years to adapt to us.
Maybe everything will be fine, maybe not, but figuring out which will require looking in more detail rather than handwaving about how ecosystems always adapt.
So both humans and AI could theoretically destroy the planet if allowed to continue to evolve. Heck, ants might do that too. Does that establish anything?
Nope. It means that if you are concerned about the possibility of something destroying the planet (or whatever), you actually have to look at what it is and what it can do and so forth.
It isn't, and it could well be much, much worse depending on some of the initial conditions, plenty of which we haven't really charted to the point where we can infer outcomes from inputs. It's chaos theory all over again, but this time with an unpredictable black box in the middle.