Yeah that’s fine. But we still don’t have any tangible specifics regarding what Apple needs to be reviewing for a locally hosted LLM. What could the criteria possibly be?
Probably an alternative version of this app or a similar app can provide an option to load your own models. Is that a problem for Apple to allow?
What I have found in my personal (and perhaps biased and anecdotal) experience is that there is a large cadre of LLM- and AI-hating people who inevitably start rambling about (a) safety and (b) copyright violations. I find this reflects more about their mental model of the world than about reality, in that they tend to be statists and collectivists who want a central authority to “protect” them. Which obviously gets under my skin, as that unfortunate mass instinct is what enabled big government, mass surveillance, and totalitarianism. Just my 2 cents, which undoubtedly many on HN may disagree with, and that’s fine. I’m actually very interested in hearing more specifics from the AI-safeguards crowd, as admittedly perhaps I’m missing something terrible about this new hammer that has been invented, a hammer I find to be incredibly useful for nailing in all sorts of ways.
I'm not sure how Apple could make an explicit policy for this. My theory is that they won't, but rather will roll out their own LLM that runs locally and is optimized for on-device hardware, which non-Apple code will not be able to use. This won't make all the third-party LLMs go away, but it will make running them very unattractive, since they'll be battery-hungry and slow compared to the official app.