Hi. ILP is Inductive Logic Programming, a form of logic-based, symbolic machine learning. ILP "models" are first-order logic theories that are not differentiable.
To put it plainly, most ILP algorithms learn Prolog programs from examples and background knowledge, both of which are themselves Prolog programs. Some learn logic programs in other logic programming languages, such as Answer Set Programming, or in constraint programming languages.
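As a toy illustration of that setting (my own sketch, not the workings of any particular ILP system): background knowledge and examples are ground facts, and the learner searches for a clause that covers the positive examples but none of the negatives. Here the candidate clause is checked by hand in Python; the family facts and the grandparent/2 clause are invented for the example.

```python
# Hypothetical toy instance of the ILP setting.
# Background knowledge and examples are ground facts; an ILP learner would
# search for a clause like: grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

background = {("parent", "ann", "bob"), ("parent", "bob", "cal")}
positive = {("grandparent", "ann", "cal")}
negative = {("grandparent", "bob", "ann")}

def covers(fact, background):
    """Does the candidate clause derive this grandparent/2 fact
    from the parent/2 facts in the background knowledge?"""
    _, x, z = fact
    parents = {(a, b) for (p, a, b) in background if p == "parent"}
    # The clause body parent(X, Y), parent(Y, Z) succeeds if some Y links X to Z.
    return any((x, y) in parents and (y, z) in parents
               for y in {b for (_, b) in parents})

# A good hypothesis covers every positive example and no negative one.
assert all(covers(f, background) for f in positive)
assert not any(covers(f, background) for f in negative)
```

A real ILP system automates the search over such clauses; this only shows what "examples" and "background knowledge" mean in the Prolog setting.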
The Wikipedia page on ILP has a good general introduction:
https://en.wikipedia.org/wiki/Inductive_logic_programming
The most recent survey article I know of is the following, from 2012:
(ILP turns 20 - Biography and future challenges) https://www.doc.ic.ac.uk/~shm/Papers/ILPturns20.pdf
It's a bit old now and misses a few recent developments, like learning in ASP and meta-interpretive learning (which I work on).
If you're interested specifically in differentiable models: in the last couple of years there has been a lot of activity, mainly from neural network researchers, on learning in differentiable logics. For an example, see this paper by a couple of people at DeepMind:
(Learning explanatory rules from noisy data) https://arxiv.org/abs/1711.04574
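For intuition about what "differentiable" means here (my own toy sketch, not the construction used in that paper): the boolean connectives can be relaxed to smooth functions over truth values in [0, 1], e.g. the product t-norm for conjunction, so evaluating a rule becomes differentiable and rule weights can be trained by gradient descent.

```python
# Smooth relaxations of the boolean connectives (one common choice;
# other t-norms exist). Truth values live in [0, 1].

def and_(a, b):
    # Product t-norm: agrees with boolean AND at the corners {0, 1}.
    return a * b

def or_(a, b):
    # Probabilistic sum: agrees with boolean OR at the corners.
    return a + b - a * b

def not_(a):
    return 1.0 - a

# At the corners the relaxation is classical logic...
assert and_(1.0, 0.0) == 0.0 and or_(1.0, 0.0) == 1.0
# ...and in between it varies smoothly, e.g. or_(0.5, 0.5) == 0.75,
# so gradients flow through rule evaluation.
```

This is only the core trick; the paper builds a full rule-learning architecture on top of ideas like it.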
Edit: May I ask why automatic differentiation is the "obvious" question?