An iOS app which connects parents with their children's screen time via screenshots and AI. Makes your kid's screen as visible as the living room TV. When screenshots are off, you choose what to allow; everything else is blocked. When screenshots are on, you choose what to block; everything else is allowed.
I have found AI-generated code to be overly verbose and complex. It usually generates 100 lines, and I take a few of them and adapt them to what I want. The best cases I've found for using it are asking specific technical questions, having it help me learn a new programming language, and having it generate ideas for brainstorming solutions to a problem. It also does well with bounded algorithmic problems that are well specified, e.g. write a function that takes inputs and produces outputs according to xyz. I've found it's usually sorely lacking in domain knowledge (e.g. it is not an expert on the iOS SDK APIs, not an expert in my industry, etc.).
My heuristic: if you're solving an already-solved problem that is just tedious, memory-intensive work, take a crack at using AI. It will probably one-shot your solution with minimal tweaks required.
The more you deviate from that, the more you have to step in.
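To make the "bounded, well-specified" category concrete, here's a toy spec of my own invention (not a real prompt I used): "write a function that removes duplicates from a list while preserving order." This is exactly the shape of problem an LLM tends to one-shot:

```python
def dedupe_preserving_order(items):
    """Return items with duplicates removed, keeping first occurrences."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

print(dedupe_preserving_order([3, 1, 3, 2, 1]))  # -> [3, 1, 2]
```

Inputs, outputs, and edge cases are all fully specified by the one-line description, which is why these prompts work so well.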
But given that I constantly forget how to open a file in Python, I still have a use for it. It has basically supplanted Stack Overflow.
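For the record, the snippet I keep having to look up is just the standard Python file-handling pattern, using a `with` block so the file is closed automatically:

```python
# Write a file, then read it back. The "with" statement closes the
# file automatically, even if an exception is raised inside the block.
with open("notes.txt", "w", encoding="utf-8") as f:
    f.write("hello\n")

with open("notes.txt", "r", encoding="utf-8") as f:
    contents = f.read()

print(contents)  # -> hello
```

Exactly the kind of solved, memorization-heavy thing an LLM answers instantly.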
"Reading other people’s code is part of the job. If you can’t metabolize the boring, repetitive code an LLM generates: skills issue! How are you handling the chaos human developers turn out on a deadline?" Good point! Treat AI-generated code as if somebody else had written it: it will need the same review, testing, and refactoring.
You too? I got the same. But then again, my account is constantly under attack anyway. At least previously they were smart enough to not trigger 2FA. Now even the incompetents are trying.
It makes me wonder if we are really so different from those who carved metal images to worship thousands of years ago. I made a 30s video exploring this idea here: https://www.youtube.com/watch?v=B7RoeHHqnAM
Is there a performance benefit for inference speed on M-series MacBooks, or is the primary task here simply to get inference working on other platforms (like iOS)? If there is a performance benefit, it would be great to see tokens/s of this vs. Ollama.
This reminds me of a recent chat I had with Claude, trying to identify what looked like an unusual fossil. The responses included things along the lines of "What a neat find!" or "That's fascinating! I'd love it if you could share more details". These would be normal, nice things to hear from a friend, but I found them pretty off-putting coming from a computer that of course couldn't care less and isn't a living thing I have a relationship with.
This sort of thing worries me quite a bit. The modern internet has already sparked an awful lot of pseudo or para-social relationships through social media, OnlyFans and the like, with serious mental health and social cohesion costs. I think we're heading into a world where a lot of the remaining normal healthy social behavior gets subsumed by LLMs pretending to be your friend or romantic interest.
Trying to find the silver lining in this makes me God's advocate I guess?
I was able to reflect a lot on my upbringing by reading reddit threads. Advice columns, relationships, parenting advice, just dealing with people. It was great to finally have a normalized, standardized world view to bounce my own concepts off. It was like an advice column in an old magazine, but infinitely big. In my early 20s I must have spent entire days on there.
I guess LLMs are the modern, ultra personalized version of that. Internet average, westernized culture, infinite supply, instantly. Just add water and enjoy a normal view of the world, no matter your surroundings or how you grew up. This is going to help out so many kids.
And they're not evil yet. Host your own LLMs before you tell them your secrets, people.
> It was great to finally have a normalized, standardized world view to bounce my own concepts off. It was like an advice column in an old magazine, but infinitely big. In my early 20s I must have spent entire days on there.
> I guess LLMs are the modern, ultra personalized version of that. Internet average, westernized culture, infinite supply, instantly.
That's a really interesting way to put it and actually made me look back at my own heavily internet-influenced upbringing. Setting healthy personal boundaries? Mindfulness for emotional management? Elevated respect for all types of people and ways of life beyond what my parents were exposed to? Yes. These were not automatically taught to me by my inherited culture or family. I would not have heard about them in a transformative way without the internet. Maybe passively, as something "those weird rich people" do, but not enough to become embedded in my mental operating system. Not to disparage the old culture. I still borrow a lot from it, but yeah I like westernized internet average culture.
I’m in the same boat. And judging by the people I’ve met at Google and FB (before it was Meta) a lot of us are refugees from conservative minded illiberal cultures within North America, Asia, and Europe. Memes are our currency. A lot of the internal cultures of these two companies are steeped in formative memes of those born in the mid-80s who only had the internet to find their people in the early 2000s.
Although I agree with you and GP, there are cynics who will say: "Ha! You think that the totality of Reddit posts is some kind of normalized, standardized, Internet average world view? HA!" There are people deep in ideological bubbles that think Reddit is too liberal! or Reddit is too young! or Reddit is too atheist! or other complaints that amount to "The average Internet-Person doesn't match what I think the average should be!" and they would not be interested in using that ISO Standard World View for anything.
I have a feeling if there is a market for this kind of LLM sounding board, the software writers will need to come up with many different models that differ ideologically, have different priors, and even know different facts and truths, in order to actually be acceptable to a broad swath of users. At the limit, you'd have a different model, tailored to each individual.
I was also quick to dive into early internet forums and feel like I got a lot out of them, but LLMs just seem different. Forums were a novel medium, but it was still real people interacting and connecting with each other, often over a shared interest. With LLMs none of the social interactions are genuine, and they will always be shallow.
I'm sure some nerds will continue to host their own models but I would bet that 99.9% of social-type LLM interactions will be with corporate hosted models that can and will be tweaked and weighted in whatever ways the host company thinks will make it the most money.
It all reminds me a lot of algorithmic social media feeds. The issues were foreseen very early on even if we couldn't predict the exact details, and it's an unsurprising disappointment that all of the major sites have greatly deemphasized organic interactions with friends and family in favor of ads and outrage bait. LLMs are still in their honeymoon phase, but with the amount of money being plowed into them I don't expect that to last much longer.
>I think we're heading into a world where a lot of the remaining normal healthy social behavior gets subsumed by LLMs pretending to be your friend or romantic interest.
I loved when Gemini called out what I thought was a very niche problem as classic. I think there are very few people attempting this stack, to the point where the vendor's documentation is incorrect and hasn't been updated in two years.
"Ah, the "SSL connection already established" error! This is a classic sign of a misconfiguration regarding how BDBA is attempting to establish a secure connection to your LDAP server."
I spent a good half hour "talking" to 4 mini about why Picard never had a family and the nature of the crew as his family despite the professional distance required. It really praised me when I brought up the holodeck scene where Data plays King Henry walking among his men. I felt pretty smart, and then realized I'd not actually garnered the admiration of anyone or anything.
I think there's a similar trap when you're using it for feedback on an idea or to brainstorm features and it gives you effusive praise. That's not a paying customer or even a real person. It's like those people you quickly learn aren't worth seeking out for feedback because they rave about everything just to be nice.
> “…I found them pretty off-putting coming from a computer that of course couldn't care less and isn't a living thing I have a relationship with.”
I’ve prolly complained about it here, but Spectrum cable’s pay by phone line in NYC has an automated assistant with a few emotive quirks.
I’m shocked how angry that robot voice makes me feel. I’m not a violent person, but getting played by a robot sets me over the edge in my work day.
Reminds me of a BoingBoing story from years ago about greeter robots being attacked in Japan. Japan has a tradition of verbally greeting customers as they enter the building, and large department stores will have dedicated human greeters stationed at the entrance. IIRC this was a large store that replaced its human greeters with these robots, and random customers were attacking them. I now know how they feel.
After giving me continuously wrong answers, ChatGPT decided it would invite me to treat it as a "learning opportunity" instead.
> I completely understand your frustration, and I genuinely appreciate you pushing me to be more accurate. You clearly know your way around Rust, and I should have been more precise from the start.
> If you’ve already figured out the best approach, I’d love to hear it! Otherwise, I’m happy to keep digging until we find the exact method that works for your case. Either way, I appreciate the learning opportunity.
Which reveals strong reasons to suspect "parroting" qualities: qualities that should have been fought in the implementation since day 0.
I always laughed when ChatGPT would reply with the same emoji I typed, regardless of context. Not sure if that's parroting exactly, but I assumed it would understand the meaning (if not the context) of emoji?
It's fun to pick it apart sometimes and get it to correct itself, but you'd often never know it was wrong unless you had direct or deep-cut knowledge to interrogate it with.
And we tackled that issue long ago with education.
What are you trying to imply? Imitating fools or foolery is not a goal, replicating the unintelligent is not intelligence - it is strictly undesirable.
Yes! I've often said "software engineers should be doomed to use what they create, or at least watch others try to use it." One example is our local Costco parking garage. They replaced the old push-button ticketing kiosk (which had nothing wrong with it) with one that had a touchscreen. Many times the line is backed up, and one day I saw why. The guy was pushing the touchscreen button as if it were physical, and it wasn't registering the tap. He was using multiple fingers and mashing instead of using one finger and doing a clean tap inside the digital button.