Actually it reflects the idea of Unicode code points correctly. They are meant to represent graphemes, not semantics.
This isn't honored; we have many Unicode code points that look identical by definition and differ only in their secret semantics, but all of those points are in violation of the principles of Unicode. The Turkish 'i' is doing the right thing.
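To make the Turkish case concrete: the plain 'i' and 'I' are the same code points everywhere; only the case mapping is locale-sensitive, pairing them with the Turkish-only İ (U+0130) and ı (U+0131) instead of with each other. A minimal Java sketch (class name is just illustrative):

    import java.util.Locale;

    public class TurkishCase {
        public static void main(String[] args) {
            Locale tr = Locale.forLanguageTag("tr");
            // Root-locale (default) mapping: i -> I, the dot distinction is lost
            System.out.println("i".toUpperCase(Locale.ROOT)); // I  (U+0049)
            // Turkish-locale mapping: dotted i -> dotted capital I
            System.out.println("i".toUpperCase(tr));          // İ  (U+0130)
            // ...and dotless capital I -> dotless small i
            System.out.println("I".toLowerCase(tr));          // ı  (U+0131)
        }
    }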
How do you define "look identical" outside of fonts, which, from my understanding, were excluded from Unicode consideration on purpose?
E.g. Cyrillic "а" looks the same as Latin "a" most of the time; both are distant descendants of the Phoenician 𐤀, but they are two different letters now. I'm very glad they have different code points; it would be a nightmare otherwise.
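You can check the distinction straight from the code points and character names; a small Java sketch using only the standard library (class name is just for illustration):

    public class LookAlikes {
        public static void main(String[] args) {
            String latin = "a";     // U+0061
            String cyrillic = "а";  // U+0430
            for (String s : new String[] { latin, cyrillic }) {
                int cp = s.codePointAt(0);
                System.out.printf("U+%04X %s%n", cp, Character.getName(cp));
            }
            // Prints:
            //   U+0061 LATIN SMALL LETTER A
            //   U+0430 CYRILLIC SMALL LETTER A
            // The two strings render (almost) identically but never compare equal.
            System.out.println(latin.equals(cyrillic)); // false
        }
    }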
No, Han unification is a completely different thing. Unicode Han unification represents distinct glyphs as the same code point - the intent is that you choose the glyph you want by setting a font (!). This has been acknowledged as a mistake.
Having distinct code points for Latin capital letter A, Greek capital letter A, and Cyrillic capital letter A is the reverse: separate code points for glyphs that are identical by definition. That's also a mistake.
(Although it might be required by Unicode's other principle of being fully compatible with a wide variety of older encodings. There are many characters, like 囍, that don't qualify to have a code point, but that have one anyway because they're present in an encoding that Unicode commits to represent.)
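For the three capital A's specifically: they are three distinct code points, and no normalization form folds them together, so the distinction is permanent at the text level. A rough Java sketch (class name illustrative):

    import java.text.Normalizer;

    public class ThreeCapitalAs {
        public static void main(String[] args) {
            // Latin U+0041, Greek U+0391, Cyrillic U+0410
            for (String s : new String[] { "A", "Α", "А" }) {
                int cp = s.codePointAt(0);
                String nfkc = Normalizer.normalize(s, Normalizer.Form.NFKC);
                System.out.printf("U+%04X %-26s NFKC -> U+%04X%n",
                        cp, Character.getName(cp), nfkc.codePointAt(0));
            }
            // All three pass through NFKC unchanged: Unicode treats them as
            // three different characters, not one shape with three identities.
        }
    }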