I mean, you wouldn't run a database on a GB10 device, or on a cluster of them. GH200 is another story; even so, the potential of databases-on-GPUs still hinges on whether there are enough workloads that spend a substantial fraction of their total wall-clock time compute-bound.
In other words, hypothetically, if you can make logical plan execution run 2x faster by rewriting the algorithms to use GPU resources, but physical plan execution remains bottlenecked by the storage engine, then the overall gain is negligible.
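This is just Amdahl's law. A quick sketch with made-up numbers (the 20/80 split below is purely illustrative, not a measurement):

```python
# Amdahl's-law estimate of overall speedup when only part of the
# workload (the compute-bound fraction) is accelerated.
def overall_speedup(compute_fraction: float, compute_speedup: float) -> float:
    # Runtime normalized to 1: the non-compute part is untouched,
    # the compute part shrinks by the given factor.
    return 1.0 / ((1.0 - compute_fraction) + compute_fraction / compute_speedup)

# If compute is 20% of wall-clock time and the storage engine the
# remaining 80%, a 2x faster compute path helps very little:
print(overall_speedup(0.2, 2.0))  # ~1.11x overall
```

So unless the compute-bound fraction dominates, even a large per-operator GPU speedup barely moves end-to-end latency.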
But I suppose there could be some use case where this turns out to be a win.