The cognitive load the author attributes to abstractions isn't caused by abstraction itself, but by the leakiness or inadequacy of the particular family of abstractions in use.
In an ideal setting, when solving a problem we first produce a language for the domain of discourse with a closure property, i.e., operations of the language defined over terms of the language produce terms in the same language. Abstraction is then the process of implementing the terms of this language of discourse in some existing implementation language. When our implementation language does not align with the language of discourse for a problem, cognitive load increases: instead of remaining focused on the terms of the language of discourse, we are now in the business of tracking bookkeeping information in our heads. That bookkeeping is supposed to be handled by the abstraction, consistently, so that it can simulate the language of discourse.
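As a minimal sketch of the closure property described above (the `Money` type and its method names are hypothetical, chosen purely for illustration): every operation over terms of the little language yields another term of the same language, so client code never has to track the underlying bookkeeping (here, integer cents) in its head.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Money:
    """A term in a tiny 'language of discourse' for money amounts."""
    cents: int  # implementation detail, hidden behind the abstraction

    def plus(self, other: "Money") -> "Money":
        # Closure property: Money + Money -> Money
        return Money(self.cents + other.cents)

    def times(self, n: int) -> "Money":
        # Closure property: scaling Money -> Money
        return Money(self.cents * n)


# Client code stays entirely inside the language of discourse:
price = Money(250).times(3).plus(Money(99))
```

Because composition never escapes the type, the abstraction does the bookkeeping and the client reasons only in domain terms.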
So you end up with a hodgepodge of mixed metaphors and concepts and bits and pieces of languages that are out of context.
Of course, in practice, the language of discourse is often an evolving or developing thing, and laziness and time constraints push people toward what is momentarily expedient rather than correct. Furthermore, machine limitations mean that what is natural to express in a language of discourse may not be terribly efficient to simulate on a physical machine (without some kind of optimization, at least). So you get half measures or grammatically strange expressions that require knowledge of the implementation constraints to understand.
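A familiar instance of that last point, as a sketch (Fibonacci is my example, not the author's): the definition that is most natural in the language of discourse is exponentially slow to simulate directly, so real code wears an implementation-aware annotation that exists only because of machine limits.

```python
from functools import lru_cache


# Natural in the language of discourse, but exponential to run directly:
def fib_natural(n: int) -> int:
    return n if n < 2 else fib_natural(n - 1) + fib_natural(n - 2)


# The half measure: the same definition decorated with memoization,
# a detail you can only understand by knowing the implementation
# constraints, not the mathematics being expressed.
@lru_cache(maxsize=None)
def fib_cached(n: int) -> int:
    return n if n < 2 else fib_cached(n - 1) + fib_cached(n - 2)
```

The decorator changes nothing about what the expression *means*; it is pure bookkeeping leaking back into the surface syntax.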