In their conversation, they bring up mobile as a recent platform shift that caused a lot of disruption. But I think it's interesting to remember how long that took. The iPhone was released in 2007, but the real winners of mobile were launched years later -- Uber (2010), Snapchat (2011), and TikTok (2016) -- and those winners took several more years to even start to gain true traction in the market.
I don't think a lot of people back in 2007 could have predicted that the biggest thing to come from mobile would be an app that let teens remix music videos and share with their friends.
This is why I think it is a little pointless to try and create mental models for what products and features to build to capitalize on AI (though it can be fun). It's so early that we're not capable of understanding what's possible yet. If anything, we're probably at the viral "fart app" stage that mobile was in for its first few years.
>I don't think a lot of people back in 2007 could have predicted that the biggest thing to come from mobile would be an app that let teens remix music videos and share with their friends.
I think that the biggest thing to come from mobile was always available location, exemplified by Google Maps which already existed before mobile happened. Many of the most successful apps relied on this.
TikTok is different, in that it could have existed on desktop (but would have looked very different) whereas Uber (for example) definitely couldn't.
I don't use TikTok so maybe I'm off base, but I think while technically it could've existed without mobile, socially I'm not so sure.
I wouldn't underestimate the amount of friction reduced by having an app on your phone which a) is always with you, b) can also record top-quality video, and c) has a data plan good enough to upload there and then.
> To take it a notch further, it's the video editing simplicity that really blew TikTok up.
Yup, definitely. Effective mobile editing was a big driver of its success.
That being said, while it's definitely more effective and popular on mobile, how much of that is just down to more people using the internet on mobile relative to desktop?
This is unlike Maps or Uber which only make sense with mobile and always available location.
Not exactly TikTok, but something similar was tried on desktop - Dailybooth was a YC startup which enabled users to take a photo from the desktop website and post it.
> I wouldn't underestimate the amount of friction reduced by having an app on your phone which
I wish I could do software development from my phone the way people casually watch tiktok videos. The least unproductive thing I can currently do when trapped with only a phone is read HN.
I don't think that Uber through SMS would be strong enough to compete with taxi services.
One of the core features that made me use Uber was the map - I could see where the driver was going and how far away he was. Also, the app is localised into a language that I can understand, and I can see the price upfront without having to worry about getting scammed. Recently I had to book a taxi over the phone at the end of the world (literally - Ushuaia) in a language that I can barely speak, and the experience was rather stressful in comparison with using an app.
Seeing a price up front is just a different business model and localization could be easily handled with a setting on your account. Neither is tech-dependent, both could have worked over SMS too.
Real-time mapping would be trickier if that's really a killer feature for you.
For Uber, though, there's the driver-side experience as well as the customer side. I don't know if you could've made something seamless enough for drivers to use from an old feature phone.
TikTok's success is primarily about the creator tools, not the viewing experience. Making video editing mobile-friendly and accessible to more people is what enabled the proliferation of short-form content.
I am constantly surprised by how prescient my Media Studies professor was back in 2007 about how everything has shaken out since then. "Your data is valuable, don't give it away" is ringing in my ears as I give all my data away to OpenAI.
Individual data is extremely valuable, they wouldn't collect and store it individually if all they wanted was the aggregate.
Identity-specific data is sold all the time for everything from advertising to credit scores and what would otherwise be called social credit scores if they were run by the government instead of private companies.
I'm not really sure what would be considered cheap vs valuable here, so I don't have a great answer there. My point was only that individual data is collected and sold often; it must be valuable or there wouldn't be a market.
I'd be curious to see how the price compares when selling 10k individuals' data versus aggregate data for those same 10k people. Presumably, if it was cheaper to buy all the individual data, I would do that and aggregate it myself.
The data is only useful in aggregate, but different people/use cases require different types of aggregations. Using pre-aggregated data is difficult, because it almost certainly hasn't been aggregated in the way that's convenient for whatever analysis you're trying to do with it.
The aggregate data is often more useful in commercial use cases, but plenty of use cases need the individual data as well.
Private investigators, three letter agencies, and any company wanting to send mailers to my new address when I move all need the individual data to target me specifically.
I totally agree the aggregate data is given more value in a commercial market heavily focused on advertising and now training LLMs, my only point was that there are markets that highly value individual data as well.
Sure, it is fractionally "valuable" in the sense that it is worth some tiny percentage of the large datasets it belongs to that get purchased for significant amounts.
I think the point stands that it wouldn't be easy to sell your individual data, and even if you could, it would be for a pittance from someone who is looking to build a large dataset. They certainly wouldn't be giving you an amount that justifies advice like "hold on to your data".
>What if I want to keep my history on but disable model training?
We are working on a new offering called ChatGPT Business that will opt end-users out of model training by default. In the meantime, you can opt out of our use of your data to improve our services by filling out this form. Once you submit the form, new conversations will not be used to train our models.
Sure it does. A secret stops becoming a secret once you give it away to enough people. I feel like you're trying to justify piracy but picked the wrong argument.
I'm talking about personal data. Piracy is rarely a concern here.
(And btw, piracy only makes the data less valuable, because you lose the ability to sell it to someone who already has it. Not because the game or movie etc would become less enjoyable.)
> But I think it's interesting to remember how long that took. The iPhone was released in 2007, but the real winners of mobile were launched years later -- Uber (2010), Snapchat (2011), and TikTok (2016)
And really something more like 2010 is probably a better date for when the iPhone really took off with the 3GS generation. There were obviously earlier smartphones as well (Blackberry, Treo) but they weren't really mainstream devices and didn't especially enable third-party apps.
So, yeah, there was a maybe five year (and certainly less than ten) period when smartphones went from a fairly niche thing to ubiquity which is very fast compared to technology adoption generally.
Might be worth looking at the limiting factors that caused the delay between iPhone (2007) and the advent of Uber, et al.
Then overlay that with the limiting factors for AI/LLMs.
I had a third-party logistics startup in 2007 that I would have loved to turn into Uber for shipping, but it was at least a few years before cell networks and smartphone ownership reached a point where that was possible.
I don't know if there are limiting factors that will delay LLMs' potential to disrupt the status quo.
> I don't think a lot of people back in 2007 could have predicted that the biggest thing to come from mobile would be an app that let teens remix music videos and share with their friends.
That's funny for me to read almost 15 years after rjdj [0]. There is a difference between an idea that is possible, and an idea whose time has come.
As with all Hype Cycles: stick the buzzword into your company name (see "dot com" for examples), work it into every sentence of every conversation that you have, and benefit from the torrential flow of investment chasing said buzzword.
Ignore all else and get the company name in front of the cheque books as quickly as humanly possible. Product-market fit, MVP, bootstrapping and stealth are naughty words that have no place in Hype Cycles.
No further strategy required, however, for the advanced entrepreneur - be aware that all cycles have a bust phase - and this time it is not different.
(unless you wrote blog posts and "content" - in which case copy the article you wrote about Product Strategy in the Age of NFTs a couple of years ago and swap the crypto for ai - you will get lots of clicks and nobody will notice)
Yes, I find myself wishing that "AI" wasn't the term chosen for generative text and image tools, as that term has been pushed by culture/movies/games for the last half-century to mean something specific, not just (still very impressive) tools to generate images and text.
Of course, this could have never happened, as the temptation of "AI" for marketing products is irresistible and the confusion between "real AI" and "new generative LLM tools" is actually beneficial for companies.
I don't know what will happen to the term "AI" after this hype cycle dies down.
Yes, I think they can pass the Turing test; yes, I think this was a poor test to begin with if we are attempting to clarify what machine intelligence is and how it compares to human intelligence.
I genuinely do not understand how this attitude is so common on a site with so many experts. Surely two things being indistinguishable is a step change if we’re trying to compare them?
The Turing Test is a test of mimicry, not of identity. It's based on a metaphysics that says appearance = reality, which has all sorts of issues. There are plenty of ways we can distinguish humans from machines and I expect this other "background information" (primarily biological in nature) will play an increasingly important role. Especially embedded cognition and the gradual realization that intelligence is embedded in its environment, not some kind of abstract, external entity.
Not sure why I’m responding to a comment that says “get over it lol”, but:
Artificial Intelligence is an academic field that goes back nearly a century. It’s kind of a serious thing that intelligent people have spent careers thinking about, you know?
There are only two things necessary for something to be called Artificial Intelligence and it's pretty obvious what those things are.
There is no evidence, basically none whatsoever, that logic can model general intelligence or narrow real-world intelligence, or that general "perfect logical reasoning" is a thing that actually exists in real-world relationships. None.
No animal we've observed does it. Humans certainly don't do it. The only realm where this idea actually seems to work is fiction. And this was not for lack of trying. Some of the greatest minds worked on this for decades, and some people still don't seem to get it. Logic doesn't scale to all of reality. It just breaks. Anything other than clear definitions and unambiguous axioms and it falls apart.
I'm not saying Logic wasn't useful, or that the things it did produce that worked shouldn't be called AI, but the idea that you shouldn't call LLMs AI because of some hoped-for scenario that never seemed to leave the realm of fiction is just extremely silly.
In this instance, Logic systems are that guy in the stands yelling that he could've made the shot, while he's not even on the field.
This is all making me realize investors are just people with money (not these super intelligent market predictors that they're often portrayed). You'd do the same to market a product as you would to skim off the flood of money going into a hyped field I suppose.
> not these super intelligent market predictors that they're often portrayed
If that were true, there wouldn't be that many funded failed startups. In fact, there might not even be startups at all. The investors could take the funds and just find someone to do their bidding instead of randomly "investing".
Just out of curiosity I googled the title with AI replaced by Blockchain and voilà - hundreds of similar articles that have no meaning nowadays. I know it's a stupid comparison but still funny to me.
Blaming interest rates is for losers with blocks, chains, tokens and metas in their name.
Interest rates were high in the "dot com" era and that didn't stop them. If you raise 100mil now you can make your payroll and your AWS bill from the interest alone.
To me, these brainstorming sessions theorizing the future of AI tools always miss a key thing, which is that human beings are still human beings. They don't follow logical rules of adoption and they often rebel against the things you force them to do.
For example, they talk about AI-generated copies of your voice becoming the way people communicate with each other. But who wants to listen to a computer copy of someone else's voice? No one. Maybe it will replace the pizza shop guy answering the phone, but it certainly isn't going to replace real conversations between friends and family members.
I saw another app that uses a small number of family photos to generate the surrounding scene where the photos were taken. Again, it's just a gimmick – family photos have value because they are memories of real events, not because of the intrinsic nature of the photographic paper.
If I were a betting man, I would bet on a major backlash to this sort of "automate everything" approach and a serious counter-culture to arise in the next decade or two.
I think most of the people hyping it up are just thinking about the pure money-saving/generating side of it. You can automate away entire teams of people, imagine all the savings! You can generate infinite content with 2 clicks for basically free, think of the money to be made there!
They of course ignore the fact that most people don't want to be talking to a robot and would take the human any day of the week. And most people create things as an outlet for their creativity or whatever else, not (solely) as a way to make oodles of money.
The company I work for provides tools for Support teams, and there's been talk from the higher-ups about "automating away 90% of conversations", which basically translates to us auto-closing 90% of all incoming messages for our customers based on some "AI" decisions. The only people who buy into it are the CEO/CTO and their direct underlings; everyone else in the company realizes how fucking stupid and shortsighted that is, but they don't care. It's the big hype thing, all the competitors do it regardless of how idiotic it is, and our customers want to get rid of as much human labor as possible.
It's pretty clear the implementation of these 'AI's is not solving a user problem. At my workplace we have mountains of user requests for maybe ~5 key features from years ago that no staffing has been assigned to; instead, the PMs and VPs and Directors are focused on shiny new features (yes, like 'AI') that they can put into marketing materials and get promoted. It's all hype. I have never seen a single customer request for the 'AI' features that these multi million dollar engineering teams are working on now.
> But who wants to listen to a computer copy of someone else's voice? No one.
I think you underestimate the cultural desire and pressure for a perfect presentation of one's self. It started out with mass marketing, where every advertisement and authorized photo of a celebrity published in the last 50 years is in some way retouched, cleaned up, photoshopped. This cancer spread to social media and metastasized with filters. Now Zoom by default smooths my patchy face. The next logical step is basically VTubing but with your own face instead of an avatar. Conventionally attractive people have huge advantages, after all. If it starts to be normal, then those who don't do it will be disadvantaged. Maybe family calls are different, but in professional settings where you're trying to influence others, it's an advantage.
Meh, David Foster Wallace made this prediction thirty years ago and I don't think it's actually come true. Most people simply...don't care. The majority of influencers and other people who are on camera for a living are just average looking folks.
> But who wants to listen to a computer copy of someone else's voice? No one.
I'd imagine a lot of people would be up for doing this if it improved call quality.
If you could reconstruct a person's voice from text in real time, locally on your phone, you would only need to transmit compressed text during the call, reducing the bitrate a tonne. And those volume issues and difficult-to-remove voice artefacts would cease to be a problem too (assuming the sender's side can still transcribe correctly).
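A quick back-of-envelope sketch of the savings being claimed here, using illustrative numbers (a typical speaking rate and common codec bitrates — assumptions, not measurements from the comment):

```python
# Back-of-envelope: bitrate to stream a transcript vs. a voice codec.
SPEECH_RATE_WPM = 150      # typical conversational speaking rate (assumption)
AVG_WORD_BYTES = 6         # ~5 characters plus a space, as plain text

# Bits per second needed to send the words as uncompressed text
text_bps = SPEECH_RATE_WPM / 60 * AVG_WORD_BYTES * 8

# Common voice codec bitrates for comparison
OPUS_VOIP_BPS = 16_000     # Opus at a typical VoIP setting
AMR_NB_BPS = 12_200        # AMR-NB, classic GSM-era calls

print(f"transcript: ~{text_bps:.0f} bps")
print(f"Opus VoIP:  ~{OPUS_VOIP_BPS} bps (~{OPUS_VOIP_BPS / text_bps:.0f}x more)")
```

Even before compressing the text, that's roughly two orders of magnitude less data than a conventional voice stream — the cost moves from bandwidth to on-device synthesis.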
Here is the problem. Every investor is going to ask what your moat is. What differentiates your Whisper -> Llama -> Midjourney Pipeline.ai from the next one? And the answer is, if you’re just making API calls, nothing. Sorry. There’s nothing stopping Jian Yang from creating newpipeline.ai in a weekend.
Here are a couple of things which could set you apart, off the top of my head.
Customers. Having customers is an advantage over the next guy who doesn’t, because now you can start customizing your product for unique needs rather than having a generic crud app.
Custom models. A custom model means some kid can’t just replicate your app easily.
Unique data. Data which is infeasible for another company to acquire or replicate.
Special people. People who will give your startup an edge in creating all of the above.
>There’s nothing stopping Jian Yang from creating newpipeline.ai
As someone leading a product team which "only calls apis" in this context: this is a very premature take. Hear me out :-)
Robust LLM-powered software requires
* very thoughtful design of prompt templates
* understanding of top_p and temperature in the context of said templates and their parameter space
* very thoughtful design of representative test cases for a given combination of prompt template and api params. without these, you're not even able to reason about the value range of the function you're developing
* execution and evaluation of those tests
* maintenance of all above
...and that's just talking about ensuring the desired output types in one closed context. I won't go into the creativity required to solve more complex problems (content injections, for one). Let me just say this: I won't lose sleep because anyone could just replicate our applications. The opposite is the case: I invite anyone to try and catch up. Good luck with that.
What you wrote might apply to prototypes of zero-shot applications, but not to production-ready software, let alone production-ready software that solves problems which involve more than one isolated LLM call.
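As a rough illustration of the testing discipline this comment describes, a minimal prompt-template regression harness might look like the sketch below. The template, parameters, and cases are all hypothetical, and `call_llm` stands in for whatever API client is actually in use:

```python
from dataclasses import dataclass

@dataclass
class PromptCase:
    user_input: str
    must_contain: str   # substring the output is required to include

# Illustrative template and API params; real ones would be far more elaborate.
TEMPLATE = "Classify the sentiment of this review as POSITIVE or NEGATIVE:\n{review}"
PARAMS = {"temperature": 0.0, "top_p": 1.0}   # keep sampling as deterministic as possible

CASES = [
    PromptCase("I loved every minute of it.", "POSITIVE"),
    PromptCase("Broke after one day, total waste.", "NEGATIVE"),
]

def run_suite(call_llm):
    """Run every case through the template; return (case, output, passed) triples."""
    results = []
    for case in CASES:
        prompt = TEMPLATE.format(review=case.user_input)
        output = call_llm(prompt, **PARAMS)
        results.append((case, output, case.must_contain in output))
    return results

# Offline stub so the harness itself can be exercised without an API key
def stub_llm(prompt, **params):
    return "POSITIVE" if "loved" in prompt else "NEGATIVE"

for case, output, passed in run_suite(stub_llm):
    print(case.user_input[:30], "->", output, "ok" if passed else "FAIL")
```

The point of the comment is that the value lives in the accumulated `CASES` (especially edge cases that trip the template) and in the discipline of re-running the suite whenever the template or params change — not in any one API call.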
I don't think either of you are entirely wrong but I think it can be broken down more generally.
There are very few web applications that have any real proprietary implementations that are impossible to replicate. It's a combination of factors that builds the potential moat for the business.
I think the point of the parent was that you can build expertise in all of those in a relatively short time frame (esp. when more developers will start building up experience), compared to acquiring a large customer base or a large dataset.
Any tips on building robust LLM software? And yeah I agree on how important test cases are. Having some sort of objective benchmark for judging how effective prompts are is really useful.
One reason why we are very happy with our results is because the person in charge of experimenting with the LLM-flow, while highly intelligent, is not talented with languages (honestly, he's the opposite).
This forces him to come up with creative solutions where others might get more mileage just with flawless prompts. Thanks to this, we discovered some really interesting tricks which help us solve problems that the available literature does not discuss.
Based on the flows he designs, I revise his prompt templates for more precision and token efficiency.
>And yeah I agree on how important test cases are. Having some sort of objective benchmark for judging how effective prompts are is really useful.
It's not just about effectiveness, but about ensuring that no false-positive inferences make it into production: for a given prompt template, deeply investigate which edge cases of input data would lead to false positives. Then, adapt the template and API params until the unit test has 100% success.
If you must, there is a continuum of tasks that range from suitable to risky in production settings.
Most definitely choose things on the suitable side of that scale (e.g. text generation or classification).
More complex tasks like data-to-text or summarization? I personally would always avoid them, except in certain very specific workflows for your team/task.
Further, it's not just test cases - it's an entire evaluation and prompt-versioning layer. Of the few that I am aware of, most are not even openly available (including Azure Prompt flow).
Totally agree on that. But I think this is the natural way for AI business to grow.
A) You build an MVP/PoC on top of an existing model to quickly find the value prop and product/market fit
- That won't build a moat and won't be efficient to tackle the problem at hand (as laid out by the No Free Lunch Theorem), but will help you zero in on the opportunity
B) Once you have this sorted out you build your custom model with your unique data to build your moat and perfect your product
The right time to go for funding, in my opinion, would be closer to moment B, as building that custom model is where you'll need good capital (especially to hire the talent who will help you do that).
One realization I had is that tech advantage might in fact become a disadvantage. Consider companies that have invested heavily in building a technological edge. Google Translate, for instance, faces challenges as a simple prompt can overshadow its billion-dollar product. Similarly, Grammarly's competitive edge may now rely more on its momentum and user interface than on its underlying tech. As ChatGPT introduces new capabilities, countless products see their technological edge vanish. To illustrate, the introduction of the image input feature means that, with a single prompt, it could serve as a top-tier school homework assistant, a photo-based calorie counter, and a plant identifier all at once.
This dynamic calls into question the viability of ML research as a core business strategy. Take Midjourney, for example. They've made significant strides and achieved dominance with their advanced text-to-image generation technology. But if a product like DALL-E 3, or its successors, could render their entire offering redundant in a few short years, then it's a tricky path for a company to take.
To me, this suggests that the actual "new strategy in the age of AI" is that tech companies need to transition from relying on their tech edge as their competitive advantage to relying on more stable moats - for example, the network effects rooted in two-sided marketplaces. It also hints that tech giants like Google, who above all relied on their tech advantage, could face existential challenges in the coming decade. A sort of win-or-die situation. Companies like Amazon, meanwhile, might be on more stable ground for now.
Midjourney basically used RLHF on their model, so they have a bit of a data moat in terms of human aesthetic preference, but DALL-E 3 isn't bad in terms of aesthetics and its prompt adherence is vastly superior so that preference moat might not save them. They'll need to improve prompt adherence quite a bit to stay relevant.
Data is the new oil in the age of AI. The companies that do well will have products that siphon context enriched user behavior, build a strong brand with user loyalty, and effectively capitalize on the collected data to automate some expensive task. These data collection apps will be designed to break down and gamify tasks in such a way as to maximize the training value of the resulting data stream.
For example, imagine an IDE with an integrated stack overflow type service, where people could do collaborative coding or request help and get answers inside the application. That would give edit-by-edit updates, console output, problems with solutions and user solution preference. The company that owned that data would have a huge leg up on the competition in terms of creating AI software generation tools.
I've been saying this a lot over the past year as people obsess over 'moats' and whether a future model will make an idea obsolete, or whether it's even worth getting into something because Big Co is already working on it, or will be working on it:
You learn by doing.
There's so much value in actually making something. People forget how much is in the details, or how much something like good design can differentiate.
You can sit on the sidelines forever thinking that your idea isn't 'different' enough, but the ones actually making stuff, listening to users, gaining the end-to-end experience, will actually have a larger 'luck surface area'.
Even if your idea gets taken, or someone comes along and does it better, or cheaper – there's value in _trying_.
Specifically regarding AI: the models existed for quite a while, for free, for anyone to use in OpenAI's Playground. But suddenly they hooked it up to a chat UI and it blew up completely. You never know what the key thing is going to be. But if you sit around forever, you're guaranteeing failure.
This article praises AI for improving products, but what about the jobs it might take over, like customer service roles?
And can AI truly understand human emotions to handle sensitive customer issues or create artwork that resonates with people on a deeper level? There might be more to consider than just the cool tech.
I have experienced this as well, also from higherups. They demand I interact with AI. What's the user problem it's solving for me? That's never actually defined.
The reason there is so much debate on Gen AI is due to the emergence of unpredicted abilities.
Yes GenAI is insanely impressive - this is not a luddite argument. I have personally spent months on it, and continue to do so. ITS AWESOME.
However, it really isn't going to do a tenth of the things people expect it to. Those emergent properties seem like actual reasoning, planning, or analysis.
Get to production data though, then the emperor has no clothes. Those "emergent" skills end up showing you how much correlation in text is good enough - provided the person reviewing the text is already an expert.
> While AI can streamline tasks in SaaS categories like sales and customer service, offering relief from repetitive work, the impact on project management is more nuanced.
Every single AI hype article includes some version of this sentence! "While AI can be helpful for the repetitive boring work that other people do, the impact on MY work is more nuanced."
It's basically the face eating leopard meme. "AI wouldn't automate MY job" says president of AI job automation company.
My personal take as a PM is to find narrow opportunities for the tech where it makes the most sense and poses the least risk. One project we are looking at is to tune a model against our website, with URL links, to make a more natural search function given the utter labyrinth of a website we currently have. Not insane, given the two search giants are applying the same idea. We can reduce hallucination by validating the URLs it produces before they reach the user. Also build up a list of questions, and not just search queries.
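A minimal sketch of the URL-validation guard described above. The host name, function names, and the idea of checking against an index of known site paths are all illustrative assumptions, not the poster's actual implementation:

```python
import re
from urllib.parse import urlparse

ALLOWED_HOST = "www.example.com"   # hypothetical: only our own site is trusted

def extract_urls(model_answer: str) -> list[str]:
    """Pull anything that looks like a URL out of the model's answer."""
    return re.findall(r"https?://\S+", model_answer)

def validate_urls(urls: list[str], known_paths: set[str]) -> list[str]:
    """Keep only URLs on our host whose path exists in the site's URL index."""
    valid = []
    for url in urls:
        parts = urlparse(url)
        if parts.netloc == ALLOWED_HOST and parts.path in known_paths:
            valid.append(url)
    return valid

# Example: one real page, one hallucinated link on another host
answer = "Try https://www.example.com/pricing or https://other.example/signup"
safe = validate_urls(extract_urls(answer), known_paths={"/pricing"})
print(safe)   # only the validated on-site URL survives
```

In production you might replace the static path index with a HEAD request against the live site, but the shape is the same: the model proposes, a deterministic check disposes, and only verified links ever reach the user.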
Other avenues are more human accelerators than replacements. I have been around long enough to know that if a tool presents a risk to someone's job, the tool often gets thrown down the stairs "accidentally". GE bought in hard to Google Glass back in the day and tried having it walk through procedures for complex repair processes. A great idea, if literally anyone in the field had asked for it.
I'm with many that the hype train hit hard for "AI" and blockchain, but LLMs for me do have real value and real application for some excellent use cases. I also find them an excellent sounding board for my own ideas, though the models tend to not want to disappoint you.
I work for a large consulting firm that is cheerleading LLMs very hard. I've yet to see a use case beyond a better chatbot and knowledge-base search. I've also yet to hear about a large client putting an LLM in production; it's all been experimental, and the total sum earned on generative AI projects in our firm last year was < $500M.
I'm not saying the hype isn't real, but I'm definitely skeptical.
edit: for context my firm screamed to high heaven how the whole metaverse thing was a game changer too. I called that one BS right out of the gate.
Honestly the way the models are improving I don’t see a ton of “product” work needed anymore. Any UI is worse than a good AI assistant that I can chat/talk to. Why do I need your forms and data rendering if the AI assistant is smart enough to figure it out on its own?
I am a lot more pessimistic about the startup scene in this area.
Look how ChatGPT can teach languages[1], good luck building an AI powered language learning app…
It gets worse for startups because Google and OpenAI have a ton more context about me. For example in the language learning conversation Google can refer to my spoken samples from other places to improve the experience. And yet, no PM at Google needs to think of this, they only need to hook up the data and throw more compute at their models.
> Any UI is worse than a good AI assistant that I can chat/talk to
Speak for yourself. I have zero interest in having a conversation with the computer, whether typing or (especially) speaking out loud.
Product and UI and UX work will continue to be valuable; if anything, good quality work will stand out even more amongst the oncoming tidal wave of low quality AI/LLM stuff.
> Any UI is worse than a good AI assistant that I can chat/talk to.
Good lord no. Never. Not in a trillion years will I be okay with interacting with these data hoovering blackboxes rather than just fucking clicking on the button with my mouse.
Hence why Bill Gates said the biggest winner in AI will be the first company who can build a true AI assistant.
I thought the same and agreed with him.
Everything we do now is just working towards that super assistant.
I think in the future, your "phone" is just an interface between you and your assistant. Not much else. As an Apple shareholder, this is my biggest worry. iOS is suddenly a lot less necessary.
It's interesting, I feel excited as I haven't felt in a long time because wherever I look, there is a possible side project where I can explore AI and LLM's. On the other hand, I would feel very scared to try to transform any of them into a business due to some of the issues discussed in the article.
They mention that “mobile-first companies killed companies that weren’t mobile-first”, can anyone point to a good example? My memory is that “mobile first” was a big hype cycle. A few years later everyone fired their native mobile teams because they realised most businesses don’t actually need a mobile app.
Agreed with this sentiment. I personally find it funny that Craigslist is still very much alive and kicking, and is my go-to for apartment hunting amongst other things, despite an underwhelming mobile experience and a design that hasn't been updated in literally decades.