Big-O has nothing to do with running things fast, although that's a welcome and nice side effect. Big-O optimisation is basically a way of finding means to reduce the effort required to do anything. It just happens that when you take fewer steps to do a thing, you also, in most cases, make it run faster. But making it run faster was never the goal. Hence you must stop thinking in terms of making a program run faster; that framing causes confusion because you are not thinking about the problem from first principles. For example, writing a vim macro turns many steps into one command, so invoking the macro is an O(1) operation. There are many such everyday things where you are doing N steps that can be automated into doing it in 1 step.
In programming parlance there is only one way big work is done: either with a for loop, or with nested for loops. That is, for (i = 0; i < SomethingBig; i++) { /* do something on each tick of i */ } or for (i = 0; i < SomethingBig; i++) { for (j = 0; j < SomethingBig1; j++) { /* do all ticks of j repeatedly, on each tick of i */ } }. Of course we can have as many sub-loops as we like, in which case the count of operations is the product of the loop sizes (i × j × ...).
Now here is how you think about making it efficient. Notice the i++ part of the for loop that moves the loop forward? Most people's minds are hard-trained to think only in terms of i++, and nothing else, so it looks very hard to imagine anything else could work there. i++ moves the loop forward one element at a time; if 'SomethingBig' is huge, it can take an eternity to get there. How else can we make the counter run faster? Increasing the step, e.g. i += 2, is one way, but that only halves the work. Multiplication, e.g. i *= 2, means the counter doubles each iteration, so it crosses SomethingBig after roughly log2(SomethingBig) steps instead of SomethingBig steps, saving enormously on computation. Finding how to do that is, basically, algorithmic optimisation.
Notice that, as mentioned above, using a tree achieves many such tasks: each step down a balanced tree discards a large fraction of the remaining elements, which is the same multiplicative shortcut in data-structure form.
Mathematics has had concepts of convergence and bounds of infinite sequences since antiquity. Big-O is just mathematical notation that you can use for whatever you want. Big-O is useful because most of the time we want to reduce time complexity of algorithms as inputs get large, where constant factors and lower-order terms usually become insignificant.
Some people measure steps, but we call it "time complexity" because they're essentially the same thing, assuming the time taken for each step does not depend on `n`.
There are occasionally philosophical implications when you prove a complexity bound, and in those cases making stuff run faster isn't the goal; but unless you're deep into theoretical computer science, making code faster usually is your goal when you have to deal with time complexity.
FWIW, for loops aren't actually a primitive of CS or even of programming. Of course you can achieve Turing completeness with C-style for loops, but tweaking for loops isn't usually how people think about reducing time complexity...