This seems like a perfect use case for "agentic" AI. OpenAI can enrich the context window with the strengths and weaknesses of each model, and when a user prompts for something the model can say "Hey, I'm gonna switch to another model that is better at answering this sort of question," and the user can accept or reject.
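As a rough illustration of that routing idea, here's a minimal sketch in Python. Everything in it is a hypothetical stand-in: the model names, the capability table, and the keyword "classifier" are placeholders for whatever a real system would use, not anything OpenAI actually ships.

    # Toy sketch of the proposed flow: classify the prompt, find a model
    # whose (hypothetical) capability tags fit, and let the user confirm.
    CAPABILITIES = {
        "general-model": {"chat"},   # placeholder model names and tags
        "code-model": {"code"},
        "math-model": {"math"},
    }

    def classify(prompt):
        # Stand-in for a real intent classifier: crude keyword matching.
        p = prompt.lower()
        if any(k in p for k in ("def ", "stack trace", "compile")):
            return "code"
        if any(k in p for k in ("integral", "prove", "equation")):
            return "math"
        return "chat"

    def route(prompt, current="general-model"):
        # Offer a switch only when a better-suited model exists, and let
        # the user accept or reject it, as described above.
        task = classify(prompt)
        for model, tags in CAPABILITIES.items():
            if task in tags and model != current:
                ok = input("Switch to %s for this %s question? [y/n] " % (model, task))
                return model if ok.strip().lower().startswith("y") else current
        return current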
Yeah, I rolled my eyes at that, but publishers basically require the front 10% of a non-fiction book to be like that. It's for the people walking through bookstores and picking up books to browse.
I guess I'm not sure what you mean by "doesn't work." I've gone through publishers and self-published. They come with different tradeoffs, including the complaints upthread about how publishers want a book to flow.
> I think there are two main reasons, the first being the sheer intellectual difficulty of crafting an informed political view leads people to tribalism out of convenience.
What's the difference between tribalism and deferring to experts on complex subjects, e.g. climate change? I have a deep skepticism of people who think they can personally reason through any complex topic from first principles. It shows a lack of humility and self-awareness. Nobody has the time to build that kind of expertise in every domain, and there is wisdom in deferring to the hard-won experience of others. But the type who thinks they can reason through everything seems like the type to call this "tribal politics."
I liken AI hype to crypto hype. There will always be some wide-eyed techno-optimist rube waxing poetic about how AI will open the door to a 4-day workweek, just like how crypto will strengthen civil society by reducing the influence of banks in handling commerce.
The reality is a lot simpler (and darker) when you realize who the buyers of this technology are, and what they want from it. And what they want is always the same: more, and faster.