I have a hunch nvidia's mipmapping algorithm changes if you open nvidia control panel and change texture filtering to "high performance" vs "high quality"
I've noticed that too. I think it depends on the topic. Many are biased when it comes to Apple (me too). Some even defend its misbehavior. My emotional reaction to Apple misbehaving is usually anger, because I somehow feel disappointed, even betrayed, by Apple. "Apple, how can you build such gorgeous hardware and still act so unethically?" This is of course irrational: Apple is a company that does everything it can to grow.
My view is that a large group of people interested in building companies, tools, etc. moved on and only come back when there’s a PR issue that pushes them to comment. What’s left behind is basically the same old /.-esque crowd from the internet of old.
Seems to be badly phrased and meant something else, since macOS is certified to be UNIX - https://www.opengroup.org/openbrand/register/ - contrary to Linux which is not UNIX-certified.
End result - Apple is forced to do whatever Google wants.
I find it hard to imagine a company - one that cares about its own future - agreeing to be required to implement whatever its _competitor_ decides.
That scenario will just hand over the monopoly keys to Google, and we're back to square one.
You're essentially programming in English. Anything that isn't mentioned explicitly, the model will have a tendency to misinterpret. Being extremely exact is very similar to software engineering when coding for CPUs.
1. The text is _engineered_ to evoke a specific response.
2. LLMs can do more than answer questions.
3. Question answering usually doesn't need any prompt engineering, since you're essentially asking for an opinion, where any answer is valid (different characters will say different things to the same question, and that's valid).
4. LLMs aren't humans, so they miss nuance a lot and hallucinate facts confidently, even GPT-4, so you need to handhold them with "X is okay, Y is not, Z needs to be step by step", etc.
I want, for example, to make it write an excerpt from a fictional book, but it gets a lot of things wrong, so I add more and more specifics into my prompt. It doesn't want to swear, for example - I engineer the prompt so that it thinks it's okay to do so, etc.
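The "add more and more specifics into my prompt" step above can be sketched as plain string assembly. This is a hypothetical illustration, not any particular API: the task, the rule texts, and the `build_prompt` helper are all made up for the example.

```python
# Hypothetical sketch: tightening a fiction-writing prompt by spelling out
# every constraint explicitly, instead of hoping the model infers them.

BASE_TASK = "Write a short excerpt from a gritty crime novel."

# Each rule targets something the model tends to get wrong unless told outright.
constraints = [
    "Stay in third person past tense.",
    "Coarse language is acceptable; this is fiction for adult readers.",
    "Keep the excerpt under 200 words.",
    "Do not summarize the plot; write the scene itself.",
]

def build_prompt(task: str, rules: list[str]) -> str:
    """Join the task with an explicit, numbered rule list."""
    numbered = "\n".join(f"{i}. {rule}" for i, rule in enumerate(rules, 1))
    return f"{task}\n\nFollow these rules exactly:\n{numbered}"

prompt = build_prompt(BASE_TASK, constraints)
print(prompt)
```

Each time the model "gets a lot of things wrong", you append another rule to the list and resend - which is the iterative loop the comment describes.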
"Engineer" is a verb here, not a noun. It's perfectly valid to say "Prompt Engineering", since this is the same word used in 'The X was engineered to do Y' sentence.
>The text is _engineered_ to evoke a specific response.
My grandma can say she engineered Google search to give search results from her location.
> "Engineer" is a verb here, not a noun. It's perfectly valid to say "Prompt Engineering", since this is the same word used in 'The X was engineered to do Y' sentence.
You guys are just looking for ways to make people feel like they're doing something big when prompting AI models for whatever tasks, even with custom instructions etc.
I know the word Engineer can be used in various ways - "John engineered his way to the premiership", "The way she engineered that deal", etc. If that's the way it's being used here, fine.
There is a reason why graphic designers have never called themselves graphic engineers.
Your grandma can say she engineered Google, but clearly you can't, because all it takes is a few minutes of looking at the history of the term to answer your own question. I realize some folks are salty they paid a ton of money for the idea that a piece of paper gives them some sort of prestige. And it does, to the 0.001 of humans in the world who are associated with whatever cul...I mean institution that sold you something that is free, with a price premium and a cherry of interest on top. All so you would feel satisfied that someone, anyone, finally acknowledged your identity. A great deal of the engineers that built the modern internet never got a formal degree. But they did get something better: real practical experience attained via tinkering.
> I realize some folks are salty they paid a ton of money for the idea that a piece of paper gives them some sort of prestige.
Actually the paper does, but my issue is not papers, rather knowledge - the level of knowledge needed for something to be called engineering.
And I have noticed your answers relate prompt engineering to software engineering/programming questions. But if you look at that OpenAI doc, even asking it to summarise an article counts as prompt engineering.
> A great deal of the engineers that built the modern internet never got a formal degree. But they did get something better: real practical experience attained via tinkering.
We have a lot of carpenters, builders, mechanics with no formal education that we call Engineers in our everyday life without any qualm because of their knowledge and experience. Don't look at it only from the lens of software engineering.
I still maintain prompting an AI model doesn't need to be called engineering.
If you are a developer doing it through an API or whichever way, you're still doing whatever you were doing before prompting entered the chat.
Maybe the term will be justified in the future.
Side note: this conversation led me to Wikipedia (noticed some search results along the way). This prompt business is already lit; I shouldn't have started it.
The first document is a forum thread full of "go fuck yourself, fucking do it", and in this kind of scenario people are not cooperative.
The second document is a forum thread full of "Please, take a look at X", and in this kind of scenario people are more cooperative.
By adding "please" and other politeness, you are sampling from the region of the training data that looks like the second document, while avoiding the latent space of the first document's style - this leads to a model response that is more accurate and cooperative.
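The two framings above can be made concrete with a toy sketch. Nothing here calls a real model; `frame_request` and the example task are invented for illustration - the point is only that the same request can be dressed in the register of either kind of thread.

```python
# Toy sketch of the sampling intuition: the same request, framed two ways.
# The polite framing is meant to resemble the "cooperative forum thread"
# region of the training distribution; the blunt one resembles the hostile one.

def frame_request(request: str, polite: bool) -> str:
    """Wrap a request in a polite or a blunt register."""
    if polite:
        return (
            "Please, take a look at the following and help if you can:\n"
            f"{request}\n"
            "Thanks in advance!"
        )
    return f"{request} Just do it."

task = "Explain why this regex fails on multiline input."
print(frame_request(task, polite=True))
print(frame_request(task, polite=False))
```

Under the comment's hypothesis, sending the polite variant conditions the model toward completions drawn from cooperative exchanges, and the blunt variant toward uncooperative ones.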