One problem is that, depending on the book, the LLM can be somewhat cagey about whether it has really read the book, or about what sources it has used in constructing its “understanding” of it. Due to its sycophancy bias, the machine will laud your understanding of the text without any real referent to back it up, giving you what are essentially bullet points.
This may hack it for cursory knowledge, like how I know the rough outline of Ulysses despite never having read it, but it cannot supplant a real textual reading. Despite all of this, LLMs can be great for comparing and contrasting writers and thinkers, and can help you organize your thoughts.