Server-side things: machine learning, an on-die network switch, various forms of offloading (SSL, compression, possibly hypervisor tasks).
It will be a while before this shows up in consumer gear, as the use cases aren't there yet. Consumers may still benefit, though: once someone figures out something amazing for it to do, they'll get a hardened version of that feature.
> various forms of offloading (SSL, compression, possibly hypervisor stuff).
I wonder whether Intel will allow that. Better hardware offloading for various algorithms (SHA, RSA, AES, …) and hypervisor acceleration (VT-x, VT-d, EPT, APICv, GVT, VT-c, SR-IOV, …) have been among the main selling points for new CPU generations. An FPGA would render most of them moot by allowing operators to configure whatever offloading they need without buying new, expensive Intel chips.
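To make the comparison concrete: software already has to detect these fixed-function offloads at runtime before using them, which is exactly the dispatch an FPGA fabric would sidestep. A minimal sketch (Linux-specific, and all names here are my own) that parses the feature flags the kernel exposes in /proc/cpuinfo:

```python
# Hypothetical sketch: detect a few of the offload features named above by
# parsing the "flags" line from /proc/cpuinfo-style text (Linux only).

def parse_flags(cpuinfo_text: str) -> set:
    """Return the CPU feature-flag set from /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

# Sample input; on a real Linux box you'd read open("/proc/cpuinfo").read()
sample = "processor : 0\nflags\t: fpu vmx aes sha_ni avx2\n"
flags = parse_flags(sample)
offload = {
    "aes_ni": "aes" in flags,     # AES acceleration instructions
    "sha_ni": "sha_ni" in flags,  # SHA extensions
    "vt_x":   "vmx" in flags,     # hardware virtualization (VT-x)
}
```

Each new generation grows this flag list; reconfigurable fabric would let the operator define the "flag" instead of waiting for Intel to ship it.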
Today's FPGAs take maybe 10x-20x more die area than the same logic implemented in an ASIC, and so cost much more; the same goes for power. So selling FPGAs would be more profitable in and of itself, plus Intel gains a full ecosystem doing R&D on FPGA algorithms, which Intel can later build into chips and sell.
Intel can always release versions with more fabric, better access to the cores, main memory, caches and system devices.
It will also open up the ability for them to sell offloading features separately as IP cores. The risk of having to disable a feature because of an error also goes down, since they can easily issue an updated bitstream for it.
Custom circuitry will always be much more efficient than the same logic on an FPGA, so hardened versions should easily outcompete the FPGA at any given task. The FPGA can also give Intel free market research on what to put in silicon next: just look at what's popular to put there.
The virtualization features aren't offloadable. They're a set of invasive changes to the memory and I/O paths of the processor core, not work that could be handed off to a coprocessor.
You could route many of the I/O paths through the FPGA if it were suitably wired: accept a slight latency bump so that some aspects of virtualization never require the cores to spend any cycles handling them.
They can always sign the FPGA bitstreams so that only approved bitstreams are allowed to run on matching approved silicon. Much like CPU binning, a given CPU would only be able to load certain bitstreams depending on its fuse flags.
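A rough sketch of that gating logic, assuming an HMAC as a stand-in for the asymmetric signature scheme real silicon would burn into ROM (every name and constant here is hypothetical, purely to illustrate the idea):

```python
import hmac, hashlib

# Hypothetical loader: accept a bitstream only if (a) its signature checks
# out and (b) this chip's fuse flags enable the bitstream's feature tier.
# Real hardware would verify an RSA/ECDSA signature against a ROM'd public
# key; HMAC with a shared secret is just a stdlib-only stand-in.

VENDOR_KEY = b"vendor-secret"  # placeholder for the vendor's signing key

def sign_bitstream(bitstream: bytes, feature_tier: int) -> bytes:
    msg = feature_tier.to_bytes(1, "big") + bitstream
    return hmac.new(VENDOR_KEY, msg, hashlib.sha256).digest()

def load_bitstream(bitstream: bytes, feature_tier: int,
                   signature: bytes, fuse_flags: int) -> bool:
    msg = feature_tier.to_bytes(1, "big") + bitstream
    expected = hmac.new(VENDOR_KEY, msg, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False  # unsigned or tampered bitstream: refuse to load
    if not (fuse_flags & (1 << feature_tier)):
        return False  # this SKU's fuses don't unlock this feature tier
    return True

bs = b"...fpga config bits..."
sig = sign_bitstream(bs, feature_tier=2)
load_bitstream(bs, 2, sig, fuse_flags=0b0100)  # permitted on this SKU
load_bitstream(bs, 2, sig, fuse_flags=0b0001)  # fuses deny the tier
```

Binding the feature tier into the signed message is what lets one signing key support per-SKU differentiation, analogous to binning.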