The way to go for searching such a database is probably to store the whole dataset in fixed-length records and then use binary search. It's trivial and fast.
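A minimal sketch of that idea, assuming a hypothetical record layout of a 4-byte range-start IP plus a 4-byte location id (the format and field names are my own invention, not anything from the original): because every record has the same size, the k-th record sits at byte offset k * RECORD.size, so a binary search needs only log2(N) seeks.

```python
import struct

# Assumed layout: 8-byte records, big-endian (start_ip, location_id),
# sorted by start_ip.
RECORD = struct.Struct(">II")

def lookup(f, n_records, ip):
    """Binary-search the fixed-length record file for the last range
    starting at or below ip; return its location id (or None)."""
    lo, hi, answer = 0, n_records - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        f.seek(mid * RECORD.size)          # fixed size => direct offset
        start_ip, loc_id = RECORD.unpack(f.read(RECORD.size))
        if start_ip <= ip:
            answer = loc_id
            lo = mid + 1
        else:
            hi = mid - 1
    return answer
```

The same file works equally well via mmap; the point is that fixed-length records turn the file itself into a sorted array.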
Edit: if you make sure the entries are sorted and every entry has a unique, incremental, contiguous ID, then you can even implement binary search via SQL itself with just a few primary key lookups (few = log_2(N)).
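A sketch of that SQL-side binary search, using SQLite and made-up table/column names (`entries`, `start_ip`): each probe is one primary-key lookup, and the loop runs log2(N) times.

```python
import sqlite3

def sql_binary_search(conn, n, ip):
    """Find the id of the last entry whose start_ip is <= ip, using only
    single-row primary-key lookups. Assumes ids are contiguous 1..n and
    rows are sorted by start_ip in id order."""
    lo, hi, answer = 1, n, None
    while lo <= hi:
        mid = (lo + hi) // 2
        (start_ip,) = conn.execute(
            "SELECT start_ip FROM entries WHERE id = ?", (mid,)
        ).fetchone()
        if start_ip <= ip:
            answer = mid
            lo = mid + 1
        else:
            hi = mid - 1
    return answer
```

Any database that can do a fast point lookup on the primary key can run this; the sortedness invariant is what makes it work.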
Btw, if you really need high performance here, use a key-value DB and just store all 2^24 entries (16 million): drop the last ".xxx" octet from the IP and perform a single lookup that returns the "id" of the place. Then retrieve the place:<id> key to get the real location.
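The two-lookup scheme sketched, with a plain dict standing in for the key-value store (the `ip:` / `place:` key naming is an assumption; only `place:<id>` appears in the original):

```python
def lookup_place(kv, ip):
    """Map an IPv4 address to a location via two key-value lookups at
    /24 granularity."""
    prefix = ip.rsplit(".", 1)[0]       # drop the last ".xxx" octet
    place_id = kv[f"ip:{prefix}"]       # lookup 1: prefix -> place id
    return kv[f"place:{place_id}"]      # lookup 2: place id -> location

# Toy data for illustration:
kv = {"ip:1.2.3": 42, "place:42": "Berlin, DE"}
```

With 2^24 prefix keys plus one record per distinct place, the whole thing fits comfortably in RAM in any real key-value store.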
Or use a real relational database configured with a "key" column and a "value" column, and let the wonders of proper clustering and indexing serve you the data in realtime no matter what load you put on it, because this dataset is shit tiny.
No really. I'm dead serious.
If you feel like optimizing this further, you can of course also save memory by moving countries and cities into separate tables, thus reducing the size of the dataset noticeably, maybe by up to 40%.
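A sketch of that normalization in SQLite (all table and column names are assumptions): the repeated country/city strings move into their own tables, and the big ranges table keeps only small integer foreign keys.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Each distinct country/city string is stored exactly once;
    -- the per-range rows shrink to a pair of integers.
    CREATE TABLE countries (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
    CREATE TABLE cities    (id INTEGER PRIMARY KEY, name TEXT,
                            country_id INTEGER REFERENCES countries(id));
    CREATE TABLE ip_ranges (start_ip INTEGER PRIMARY KEY,
                            city_id  INTEGER REFERENCES cities(id));
""")
```

Lookups then need a two-table join, but the join keys are small integers, so it costs next to nothing.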
I'm just shaking my head at all the "database problems" MySQL users invent to excuse their choice of subpar DB.