Launch HN: Just words (YC W24) – Optimize your product's copy for user growth
87 points by nehagetschoice on March 4, 2024 | 77 comments
Hey, HN! We’re Neha and Kunal, co-founders of Just Words (https://justwords.ai). We help companies gain users through product copy optimization across any app surface or webpage, using inferences from past performance data. Here’s a demo video: https://youtu.be/e6PuRzHad7M, and a live playground: https://justwords.ai/live-demo.

“Copy” in this context means short-form content such as landing page titles or email subject lines. Small and seemingly insignificant changes to these can lead to massive growth gains. We observed this on many occasions while working at Twitter. Tweaking a push notification copy from “Jack tweeted” to “Jack just Tweeted” brought 800K additional users to Twitter.

However, coming up with the copy, testing it, and optimizing different variants across users would take us forever—sometimes months. There was no methodical way to optimize copy on the fly and use what we learned for subsequent iterations. The entire process was managed ad hoc in docs and spreadsheets.

After experiencing this pain at Twitter, we observed the same problem at other companies like Reddit. After years of this, we are convinced that there’s enough evidence for this pain across the industry.

In our experience, the main challenges with copy optimization are:

Engineering effort: Copies are hard-coded, either in code or config files. Every small change requires redeployment of the app. To run an A/B test, engineers must write if/else logic for each variant. As a result, one copy optimization becomes a 2-week project on an engineer’s roadmap, and companies are only able to prioritize a small number of changes a year.

Fragmented content: There is no copy repository, so companies lose track of the history of changes on a particular string, learnings from past experiments, and their occurrences across the product. With no systematic view, product teams make copy changes based on “vibes”. There is no way to fine-tune next iterations based on patterns obtained from previous experiments.

Lack of context: Companies either test 1 copy change at a time for all users, or rotate a pool of copies randomly. In an ideal world, they should be able to present the best copy to different users based on their context.

We built Just Words to solve these problems through 3 main features:

No-code copy experimentation: You can change any copy, A/B test it, and ship it to production with zero code changes. All copy changes made in our UI get published to a dynamic config system that controls the production copy and the experimentation logic. This is a one-time setup with 3 lines of code. Once it’s done, all copy changes and experiments can be done via the UI, without code changes, deploys or app releases.

Nucleus of all product copy: All product copy versioning, experiment learnings, and occurrences across the product are in one place. We are also building integrations with copywriting and experimentation tools like Statsig, so the entire workflow, from editing to shipping, can be managed and reviewed in one place. By storing all this in one place, we draw patterns across experiments to infer complex learnings over time and assist with future iterations.

Smart copy optimization: We run contextual Bayesian optimization to automatically decide the best-performing copy across many variants. This helps product teams pick the winner in a short amount of time with one experiment, instead of running many sequential A/B tests.
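
To make the first feature above ("no-code copy experimentation") concrete, here is a minimal sketch of what a dynamic-config copy lookup could look like. This is a hypothetical illustration only, not the actual Just Words SDK; the endpoint, parameter names, and response shape are all assumptions:

    # Hypothetical sketch of a dynamic-config copy lookup (NOT the real
    # Just Words SDK; endpoint and field names are invented). The copy string
    # and experiment arm are resolved at request time from a config service,
    # so shipping a new variant requires no redeploy.
    import requests

    CONFIG_URL = "https://config.example.com/v1/copy"  # assumed endpoint

    def get_copy(copy_id: str, user_id: str, default: str) -> str:
        """Fetch the live variant for this user, falling back to a hard-coded default."""
        try:
            resp = requests.get(
                CONFIG_URL,
                params={"copy_id": copy_id, "user_id": user_id},
                timeout=0.2,  # fail fast so the lookup never blocks rendering
            )
            resp.raise_for_status()
            return resp.json()["text"]
        except requests.RequestException:
            return default  # config service unreachable: serve baseline copy

    title = get_copy("landing_page_title", "u_123",
                     default="Optimize your product's copy for user growth")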

We are opening up our private beta with this launch. Our pricing is straightforward - a 60-day refundable pilot for $2000 (use discount code: CTJW24), for one of the following use cases: landing pages, push notifications, email subject lines, or paid ad text. We will show visible growth gains to give you a substantial ROI on your pilot and refund the amount if we fail to deliver on it. We are inviting companies with >2K monthly users to try us out here: https://forms.gle/Q3xthubQFfZcXZe88. (Sorry for the form link! We haven’t built out a signup interface yet because our focus is on the core product for the time being. We’ll add everything else later.)

We would love to get feedback from the HN community on (1) the flavor of problems you have experienced in the world of copy changes, (2) how your company or others are solving them, (3) the product so far, or (4) anything you’d like to share!




Basic gist of an exchange I had yesterday:

"Healthcare, infrastructure, K12 education and other essential services are a mess because we lack the talented resources to solve those fundamental problems"

"We largely do have the resources, it's just that they're all trying to get people to click ads, getting people to waste time endless scrolling on Reddit or Twitter or Instagram, managing their brands' social media accounts, etc."

This is not a problem in need of a solution. You both seem like really talented guys, and it's depressing that this is what you've decided will help make the world a better place.

That you're part of the YC24 cohort says a lot about what's wrong with the tech industry's priorities right now.


What those industries have in common is that they are largely regulated or government-controlled, making them much more difficult to break into.

Don't hate the players, hate the game.

If we deregulated there would be a rush of innovation and the next YC cohorts would have many more startups targeting those industries.


>> This is not a problem in need of a solution. You both seem like really talented guys, and it's depressing that this is what you've decided will help make the world a better place.

>> That you're part of the YC24 cohort says a lot about what's wrong with the tech industry's priorities right now

Couldn't agree more. There seems to be a big disconnect between the startups that get founded and what would actually change the future.

Personally, this ranks somewhere between web 3.0 and generic AI <feature>.


Coming from healthcare, it's a long slog with few rewards. Can't really recommend any of those sectors tbh


Who's to say that justwords won't be used to help increase the adoption of some critical infrastructure or educational program? When building a useful tool, it can be distracting to think about the problems outside of your field of influence.


Somebody's gotta do it. Why not them? We can't all be cancer researchers; somebody's gotta keep the lights on. Ads are a big part of that.


Maybe they make millions and give it away in philanthropy. Why villainize them? If it is a good business, it is a good business.


Can you discern when copy is becoming click-baity? As content on the web becomes more click-baity, I've naturally developed mental filters. For example, when I see "<Well-known person> just...", I automatically stop reading. Becoming click-baity certainly works for particular audiences. Are you just not interested in audiences that have built up similar mental filters?


70% of Americans don't use ad blockers. Jake Paul, a low comedy YouTuber, has 20 million subscribers. A recent popular rap song had lyrics of "You think you're the shit but you're not even a fart". All of these people have money to spend and $1 of theirs is of equal value to $1 of yours and mine.


Have you thought about a specialized product to automate A/B tests for form submission? Not just copy, but also layout and which fields you collect. Direct response marketers and leadgen services could really use this.


Interesting. One of our customers just requested this today, where they want to test form fields. Are you imagining higher form submission conversions by optimizing the number, type, and layout of the fields?


Yes, ideally closing the loop on the value of those conversions as well.

The pitch would be “automatic A/B testing for your form submissions”. I talked to a few local lead-gen companies years back and they thought it was a neat idea; I just never got around to building it.


A quick Google search gives me several tools for this: Formstack, Omnisend, Zuko Form Analytics...


AFAICT Formstack and Omnisend are marketing automation and analytics services, not automatic split testing. I got out of the marketing space a while back but I do appreciate the cross check! However it doesn't look like those products are doing exactly this.


+1 that's my understanding as well.


I also think this could be a really cool chance to use a multi-armed bandit optimization algorithm, as it self-corrects so users don't even need data readouts.


Cool! What if there isn't one optimal copy, though? Would this ever be tailored to an individual user (or a group that a user is in) based on information you know about the user?

So some sort of tracking knows that your optimized copy worked well on me on some other site you service, and then it tries a similar style on another site that uses your service?


Yes, this can be tailored for individual users (or groups). To go more in-depth, we'll be using contextual bandits plus more novel methods to optimize the best-performing copy for user segments, working on top of anonymized knowledge about the user.


Can you explain more about this, please? What are contextual bandits?


Contextual bandits are algorithms that decide which action to take (like displaying ads or recommending products) based on the current situation (context) to maximize rewards (clicks, sales). They learn over time what works best in each context, balancing new trials with proven options.
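
A minimal sketch of the idea, assuming Thompson sampling with an independent Beta posterior per (context, arm) pair (real systems usually share structure across contexts, e.g. with linear models):

    # Toy contextual bandit: one Beta posterior per (context, arm) pair.
    import random

    arms = ["short_copy", "long_copy"]
    contexts = ["mobile", "desktop"]
    posterior = {(c, a): [1, 1] for c in contexts for a in arms}  # Beta(1, 1) priors

    def choose(context: str) -> str:
        # Thompson sampling: draw from each arm's posterior, play the max.
        samples = {a: random.betavariate(*posterior[(context, a)]) for a in arms}
        return max(samples, key=samples.get)

    def update(context: str, arm: str, reward: bool) -> None:
        posterior[(context, arm)][0 if reward else 1] += 1

    # Simulated traffic: mobile users prefer short copy, desktop users long copy.
    ctr = {("mobile", "short_copy"): 0.06, ("mobile", "long_copy"): 0.02,
           ("desktop", "short_copy"): 0.02, ("desktop", "long_copy"): 0.05}
    for _ in range(20_000):
        c = random.choice(contexts)
        a = choose(c)
        update(c, a, random.random() < ctr[(c, a)])
    # After enough traffic, choose("mobile") almost always returns "short_copy".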


+1. A resource I linked in another comment, sharing here as well.

This has been tried and tested a few times in industry, notably at Netflix and Duolingo for artwork and notifications:

https://towardsdatascience.com/multi-armed-bandits-part-1-b8...


I'd like to learn more about ranking techniques, the Bayesian approach mentioned in the intro as it pertains to this, and other best practices. Can anyone recommend any resources, please?


Yes, multi-armed bandits have been tried a number of times in industry, specifically for copy and UX optimization. A good resource to begin with is this blog: https://towardsdatascience.com/multi-armed-bandits-part-1-b8...

A few examples from industry:

1. Netflix artwork optimization: https://netflixtechblog.com/artwork-personalization-c589f074...

2. Duolingo notifications optimization: https://research.duolingo.com/papers/yancey.kdd20.pdf


Very cool! But maybe change copies to copy.

In my experience, copy is used the same way as code. And just as you probably wouldn't say, "I changed the codes for the three apps," most people wouldn't say, "I rewrote the copies for the three websites."


> you probably wouldn't say, "I changed the codes for the three apps,"

Unless you’re - shudder - a data scientist


The bottleneck with A/B tests isn't tooling; it's statistical ignorance among marketing professionals.

Not claiming you did this, but in your example of "Jack tweeted" vs "Jack just tweeted", it's so often bad statistics or cherry-picking that leads to results like this. Null-hypothesis testing is hard, and many people will take a reality like this chart [0] and claim that their A/B test is successful.

I definitely think there's a gap in the market since Google sunset Optimize, but $2000 seems kind of steep for something that many people got for free before.

[0] https://i.ritzastatic.com/images/3dcdd822dbd241b1b3ddeeb9540...
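
To make the cherry-picking point concrete, here is a small simulation (my own sketch, unrelated to the chart above) of A/A tests with repeated peeking. Both arms have identical CTRs, yet stopping at the first p < 0.05 declares a "winner" far more often than the nominal 5%:

    # A/A tests with peeking: stop as soon as p < 0.05 at any checkpoint.
    import random
    from statistics import NormalDist

    def two_prop_p(c1, n1, c2, n2):
        # Two-sided p-value for a two-proportion z-test.
        p = (c1 + c2) / (n1 + n2)
        se = (p * (1 - p) * (1 / n1 + 1 / n2)) ** 0.5
        if se == 0:
            return 1.0
        z = (c1 / n1 - c2 / n2) / se
        return 2 * (1 - NormalDist().cdf(abs(z)))

    false_wins = 0
    for _ in range(1_000):
        a = b = na = nb = 0
        for i in range(1, 10_001):
            a += random.random() < 0.03  # both arms share the same 3% CTR
            b += random.random() < 0.03
            na += 1
            nb += 1
            if i % 500 == 0 and two_prop_p(a, na, b, nb) < 0.05:
                false_wins += 1  # "significant", yet nothing differs
                break
    print(false_wins / 1_000)  # typically well above the nominal 0.05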


Neat product. Small thing on the demo: perhaps using Stripe in the demo isn't the most effective choice. Stripe surely has already A/B tested the heck out of their landing page; "Financial Infrastructure for the Internet" is (IMO) an incredibly strong tag line for the hero text. The alternatives generated by your tool pale in comparison.

Perhaps it would be more effective to put a lower-quality landing page in as your demo. Off the top of my head, something like https://www.intuit.com/ might work. Their existing tag line is "The global financial technology platform that gives you the power to prosper". Doesn't mean much to me - I'm sure your tool could give me some better options, which would serve much better for a demo.


Neither Stripe nor Intuit is converting users from their hero copy at this point. "Financial Infrastructure for the Internet" sounds more like a diversification pitch to investors than anything else to me.

A good example would be a company that isn't an unavoidable juggernaut in its space and also isn't great at marketing. There are many open-source projects that would work because they don't have a big marketing team, but they might still be well known.


You'd be surprised how few companies actually do any systematic copy optimization - it remains ad hoc even at the big players, which is one of the reasons we started this startup.

In speaking to SMBs and large companies, our insight was that the problem of copy optimization resonates more with larger companies, as smaller companies are more focused on survival/basic marketing techniques like opening up new channels. Larger companies have already exhausted those levers, and are ready for more sophisticated optimizations.


> You'd be surprised how few companies actually do any systematic copy optimization - it remains ad hoc even at the big players, which is one of the reasons we started this startup... In speaking to SMBs and large companies...

Did you talk to Stripe? Do they do systematic copy optimization on their landing page, or not?


> Larger companies have already exhausted those levers, and are ready for more sophisticated optimizations.

Sort of. Copy changes at larger companies happen because they're addressing a different audience and a different set of needs for that audience.

TL;DR - concrete language / early stage ==>> abstract language / larger enterprise


> "Financial Infrastructure for the Internet" is (IMO) an incredibly strong tag line for the hero text. The alternatives generated by your tool pale in comparison.

Yes! Look at the alternatives the tool generated:

* Maximize Revenue with Secure Transaction Processing

* Elevate Digital Commerce with Trusted Payments

* Unlock Opportunities with Secure Transactions

* Simplify Your Online Payments

* Accelerate Your Online Business Growth

These are just... bland. Generic. I've seen a thousand webpages with taglines just like these.

AI can be quite bad at generating unique responses. Its answers are middling. This can be great for, eg, coding where you want a copilot to generate known algorithms and approaches, but terrible for marketing where you want a slice of brilliance and genius.

Would turning up the AI temperature and other settings help, maybe?


Yep - the AI-generated copies in the first run are pretty generic - that's not the value add (sorry if the demo made it appear that way). The key value adds are (1) quickly running an A/B test with no code, and (2) running inferences based on the test data and feeding them back to the model to generate better copies in the next iteration (shown in the demo video).


I understand better now -- and I was focusing on the AI aspect because it's an interest. But I get the value prop and thanks for taking the time to explain :)


Love the idea of doing demos on landing pages that can more easily benefit from copy optimization. We will build the next one for Intuit!


Very slick, straightforward demo. Question on my mind: The AI A/B testing space seems to be getting crowded. In what way do you see Just Words standing out?


So far, we have seen: 1) companies like Statsig that are standalone A/B test platforms, 2) customer engagement platforms like customer.io that allow you to do A/B testing on their campaigns, and 3) a few website-only platforms like Coframe and Mutiny that do not integrate with internal products like notifications and logged-in pages.

The things that make us stand out:

1) There is no tool, afaik, that does inferential learning based on past experiment results on copies. The experiment analysis and the continuous feedback loop are missing.

2) To make (1) happen, the first step is copy versioning and management. Without a tool that can abstract strings in a CRM and monitor learnings over time, it's hard to make (1) possible.

3) We integrate with tier-0 services like notifications with a low-latency solution that makes copy iterations possible for in-product features, outside of logged-out surfaces like web pages.

I'd be interested to know if you have seen software that solves for (1), (2) and (3).


Makes sense. Coframe (https://coframe.ai) is top of mind.


I’ve spent lots of time dealing with the pain of this in past growth roles so I’m excited to see this!

How are you handling messaging consistency for specific users? I.e., a user is shown an experimental string of copy with one value prop and you want it to show up multi-channel for them. Do you have a way to associate the experiments on a per-user level?


Every copy has a globally unique identifier, and multiple copies can be changed in a single experiment, agnostic of channel. That way we only need to set up experiments at the user level, and as long as different channels refer to the same experiment, consistent messaging works out of the box for each copy across channels.

Example: website_landing_page_title and email_subject_line can be part of the same multi-channel experiment.
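
To illustrate with an invented schema (not the actual Just Words data model): deriving the arm deterministically from the experiment ID and user ID is what makes every channel resolve the same variant for the same user.

    # Hypothetical channel-agnostic experiment config. Both copy IDs hang
    # off one experiment; the arm is a pure function of (experiment, user).
    import hashlib

    EXPERIMENT = {
        "id": "spring_value_prop_test",
        "arms": ["control", "treatment"],
        "copies": {
            "website_landing_page_title": {
                "control": "Financial tools for everyone",
                "treatment": "Get paid faster, stress less",
            },
            "email_subject_line": {
                "control": "Your financial tools are ready",
                "treatment": "Get paid faster, stress less",
            },
        },
    }

    def arm_for(user_id: str) -> str:
        # Deterministic hash: the same user always lands in the same arm.
        h = int(hashlib.sha256(f"{EXPERIMENT['id']}:{user_id}".encode()).hexdigest(), 16)
        return EXPERIMENT["arms"][h % len(EXPERIMENT["arms"])]

    def resolve(copy_id: str, user_id: str) -> str:
        return EXPERIMENT["copies"][copy_id][arm_for(user_id)]

    # Same user sees the same arm's message on the website and in email.
    print(resolve("website_landing_page_title", "u_42"))
    print(resolve("email_subject_line", "u_42"))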


I'm curious to learn what kind of challenges you had in your past growth roles. It would be great to understand specific examples/workflows, and how you dealt with them.


One of the main ones that comes to mind relates to promotions. A good example is a fintech I worked at. We wanted to advertise specific offers to specific channels of customers, but we otherwise wanted communication with them to stay the same.

For example, we'd have the same opening email sequence, same retargeting, etc. for customers across channels but we'd want to have consistent messaging around offers (ie. "$1000 off when you sign up" vs. "first month free" kind of stuff). It was tricky because we didn't want to advertise the same offer to everyone, so we wanted to carefully segment who was getting which offer and keep it the same over a window of time.

Unfortunately we didn't have a perfect solution. The closest we came to it was to have an experiment ID tied to their user account. Then we had a system where we would define the different experiments (including messaging and promotions) with each experiment having an experiment ID. It was far from perfect but worked.


I'm no lawyer, but the statement "to productize the tooling we started building at Twitter" is troublesome to me. Do you have some protection from Twitter/X making an IP claim against this? I.e., can you prove you scrapped "the tooling we started building at Twitter" and began fresh?


The idea grew out of some of the early tools at Twitter. The actual product has nothing in it that would be considered Twitter's IP, afaik. A couple of other companies we spoke to use similar techniques to optimize copy testing. Good call out on the language though - we could have worded it better!


> we could have worded it better!

An opportunity to dogfood your product!


"Good call out on the language though - we could have worded it better!"

A little ironic that this company is all about wording it better :).


The fact that they recognize when it can be better is a positive sign that they will keep getting better.

Now, if they took a defensive position, trying to justify their non-ideal wording, that would be ironic.


It will also be fossilized in the HN database if they don't delete it very soon.


I've seen copy tests within push/email notifications run very smoothly within my company because of internal tooling (an in-house experimentation tool tied to Looker with relevant metrics). In-product copy tests, however, are not that straightforward, but they don't take too long. I wonder how you think about integrating new custom metrics that a company cares about? I like the idea of having a repo of experiment learnings in one place.


Working at Optimizely definitely showed me the power of small copy changes. And LLMs are perfectly suited for generating text variations. Makes a ton of sense.


Have we considered a/b/c or even a/c c/a testing as hero optimal interval payouts yet? Think outside the box with me here.


How do you confidently report a “best version” on lower traffic sites where each variant may only see a couple dozen successes per day?


We either run experiments with more traffic diverted to treatment (e.g. 50% instead of 1%), or run them for a longer time. This is the reason we need companies to have at least 2K monthly users to run pilots with us.


Thanks! Cool idea on that front and in general; fun to see people taking copy seriously!

I am still a little fuzzy on how this might play out on a real site, since lots of monthly users doesn’t always mean lots of conversions (especially for one-time, high dollar transactions).

Let’s say I have a page that gets 100k unique visitors per month. I show them 5 different variants of a “nudge” widget. Some do better than others, but they all hover at <1% CTR.

While there may be a story to tell as far as winners and losers (e.g. mobile users converted to variant A at 2x the rate of desktop users), how do you confidently report a “winner” from what amounts to a few thousand conversions in a month?
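
For rough intuition on the numbers, here is the standard fixed-horizon two-proportion sample-size calculation (a generic sketch, not any vendor's methodology):

    # Approximate users needed per arm (alpha = 0.05 two-sided, power = 0.80).
    from statistics import NormalDist

    def n_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
        z_a = NormalDist().inv_cdf(1 - alpha / 2)
        z_b = NormalDist().inv_cdf(power)
        p_bar = (p1 + p2) / 2
        num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
               + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
        return int(num / (p1 - p2) ** 2) + 1

    print(n_per_arm(0.008, 0.010))  # 0.8% vs 1.0% CTR: ~35,000 users per arm
    print(n_per_arm(0.005, 0.010))  # 0.5% vs 1.0% CTR: ~4,700 users per arm

So at 100k visitors split across five variants (20k per arm), a month of data can only confirm fairly large lifts; distinguishing variants that all hover near the same <1% CTR genuinely needs more traffic, a longer window, or pooling across segments.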


Does the live demo page do anything? I just see a white page with the logo and a "retool" floating button, on Android Chrome.


We built it on top of Retool, which currently supports only desktop sites.


Meta, TikTok and Google have all the data. Not even a sampling. They can exactly answer what is the most effective copy for possibly anything that anyone wants to do. No A/B test required.

So why don't they offer that?

I know it's not because the data is proprietary or private, because basically all the information you need is visible on Facebook Ad Library, more than enough to answer most questions about authoring copy by sheer mimicry.

You emphasize the UX here a lot. I don't know. I think Meta's UX is fine, end of story.

This isn't meant to be a takedown; it just seems intellectually dishonest. Anyone who has operated these optimization systems at scale knows that the generalized stuff you are saying isn't true. You're emphasizing a story about engineers versus product managers that is sort of fictional. The right answer is the one most companies are already taking, which is to not A/B test at all, because it doesn't matter, and when you do see results they are kind of fictional.

And anyway, it belies the biggest issue of all, one that is actually symptomatic of Twitter and why things were so dysfunctional there for so long, long before it was taken private: you are espousing the very Twitter-esque theory that "Every idea has been had, and it's just a matter of testing each one, one by one, and picking the one that performs best." That was the core of their product and engineering theory, and it couldn't be more wrong. I mean, if you don't have any experience or knowledge or opinions, why the hell are you working there?

> However, coming up with the copy, testing it, and optimizing different variants across users would take us forever—sometimes months.

The right answer is right in front of you! Don't do it! Don't optimize variants, it's a colossal waste of time!


I think even if you have all the data, it's still not an exact science, because what works for one audience will not work for another.

HN is a good example of this. Headlines that are too outrageous or catchy do not get upvoted that much here but something simple like “I created a rust debugging toolkit” will likely get upvoted like crazy, while something like “I got laid off a day after I got pregnant. Here’s what happened” probably will get buzz on TikTok.


The danger is that it's all local optima. An A/B test shows lift on "I created a rust copilot" vs "rust debugging toolkit". Ultimately... it's one big "so what" outcome. These are distractions. A product either has market fit or it doesn't.

The big companies are especially susceptible to these distractions because they have the budget to blow chasing micro funnel optimizations. It sounds reasonable, but in my experience I agree it's a waste of resources.

It's too hard to prove causality. Entire orgs are set up to run rigid experimentation analysis and prove incrementality so we can trust the data, but that should be a warning of just how complicated it is, and we can't trust it 100%. There are external factors. Hence a button color and a line of text shouldn't make the cut list of priorities; it's not that significant.


Yes, but just because you have product-market fit doesn't mean you shouldn't try to optimize copy so that it reaches as big an audience as possible.


Yeah - two themes here -

1) Is copy even important? I think it is. If this post was titled, "Auto-tune experimentation for short-form content optimization", it might make half the audience confused about the product. In fact, the 1-liner we use for HN is very different from when we talk to VCs, because the audience is different with different goals & backgrounds. I guess the point I am making is that messaging has to be contextualized, depending on users, platform, and goals.

2) PMF vs copy - I agree that the two are orthogonal. Copy is not going to solve for the lack of a PMF (and it shouldn't). Exactly the point above - the goal is to help more and more users comprehend what you do, hopefully in a way that's more personalized to them.


PMF isn't orthogonal to copy if you're experimenting with copy to drive an outcome. What is the outcome then? How do you measure it? Isn't the state of the art conversion?

That's the challenge: the conversion funnel is complex, with many factors, and the largest of them, in simple terms, is PMF.

If we measure downstream signals like clicks or inbound leads, that's more aligned with "discovery of PMF", and that's good. But it should be stack-ranked as such; it's not driving the needle, it's exploratory.


More companies think they have PMF than really do. So the risk is that they get funding without fit, and can afford to set up data science orgs to prove out experiments and use non-trivial resources running copy tests.

If Just Words can make this trivial, then at least it's minimizing the distraction. That's a win, and fwiw I think B2B wants this product, so the company can do well. I just don't think micro content optimization, after doing it at a unicorn for 8 years, really moves the needle like people believe the data shows. People use PMF products in spite of their UX! (for example)


> I think even if you have all the data, it’s not always a science too because what works for one audience will not work in another.

Wouldn't 'all the data' by definition make the data for various audiences at least calculable?


Appreciate the candid comments and opinions here. I'll break it down and go over them:

1. Having access to data is not the problem we're after (most companies have the first-party data in-house). The key challenges are around having a platform that fundamentally separates strings (copies) from code, and lets you update them effortlessly, based on inferences from that data. So, I am not sure I understand why this is a Google/Meta product.

2. UX is not the value add from this product - agree with you on it (even if it appeared to be the emphasis). The ability to make scientific edits without re-deployments and accelerating continuous iterations based on user feedback is what we are going after.

3. Curious why you think A/B test results are fictional? Getting stat sig results is probably the surest way to conclude results. Perhaps there's a different angle you are talking about here?

4. RE: don't A/B test at all. Given the number of users that get exposed to every change a consumer company as large as Twitter makes, not testing can be disastrous. Which brings up another great point: large companies are struggling to use all the (generic) gen-AI content today, because it needs to be performance-tested before it can be placed in front of millions of users, and that's not scalable today.

5. You may be alluding to another good point: copy is as much art as it is science, and writing it well takes context, quality, and expertise. That's something we hold a strong opinion on, and we don't see this or any other tool eliminating that expertise. The goal is very much to streamline and augment those workflows.


> So, I am not sure I understand why this is a Google/Meta product

Which audiences am I optimizing copy for? Where do they come from? Some Google-, Meta-, TikTok- or Apple-owned channel, right?

Google has indexed every website. Meta has every ad. Can't they just tell me what copy to use? Why don't they? I mean, they know! They know what copy works best, for pretty much everything. They can sort by clickthrough rate, and by revenue due to the purchase data they have; they have everything! You talk about SMBs - they know every SMB! They know your margin and your COGS and whatever, because in aggregate they observe rational spending where all the cost is eaten by marketing; they know your potential market, etc. They know all this. They don't need to run tests. They can look at very recent, weeks-old, historic data, and they have way more than enough samples to answer these questions to more or less the same degree of certainty and scientific rigor as any SMB doing it themselves as a first party.

I mean if they wanted to, they could run the A/B tests for you! Google could "just" serve a different web page with slightly altered copy. And see if more people "click" or "convert" or whatever. They have better technology, 1,000,000x more data... Why don't they do this? You wouldn't even need UX. It could just happen, you would check a box, and they would do this.

> fundamentally separates strings (copies) from code... and lets you update them effortlessly... The ability to make scientific edits without re-deployments and accelerating continuous iterations based on user feedback...

You keep talking about UX for developers and product managers. These are UX things. It doesn't actually matter. The existence or non-existence of what you're talking about doesn't correlate to higher or lower conversions, it isn't a scientific opinion on the practice of optimization, it is just a bunch of UX patterns to achieve it, but it could be achieved in many ways, perhaps with even better UX. Like in the example I gave, where Google "just" does this for you, which is the best UX because there is no UX, you don't need to separate strings from code, and you don't need to update them, because you don't need to do anything. Google could just do this. They own the channel, they see everything, they have the technology.

So why don't Google and Meta and Apple offer an automatic optimization product? You ought to have an opinion, it can't just be, "I don't know." I mean the sort of obvious answer is that "optimization doesn't really work" instead of "three paragraphs of bullshit."

> Curious why you think A/B test results are fictional? Getting stat sig results is probably the surest way to conclude results. Perhaps there's a different angle you are talking about here.

Well, one reason I am very confident they are fictional is that the people who have owned the channels for a decade haven't offered a tool to do this.

I mean, maybe they will. Maybe it was a technology problem, but I don't believe so. You could have Markov-chained your way through 5-word-long taglines and whatever. They didn't need to wait for generative AI to create valid test strings for people's websites. Indeed, they could just let you copy the best-performing taglines they see in their systems. Why. Don't. They?!

> Given the number of users that get exposed to every change a consumer company as large as Twitter makes

Another POV is that every change they made was bad. They thought they were a product organization, and they are really a backend engineering organization, where the best decisions are all based on first principles or executives' opinions, not on some unknowable measurement about audiences.


Do you happen to be hiring for customer support roles?


Why is all text lowercase on your website? I wouldn't want this in my product :)


Yeah it looks off, makes the website look rushed imho


Oh! That might tie into my other comment about how this would work even better if users were individually tracked!


So it would detect that you're, for example, some sort of millennial, and use the punctuation that was passed down to you as correct and a signal of competency.


Yes, we think of this as a decision tree, where initially, copies may look different by say, user demographics and location. As we learn more about what's working well for different dimensions of users (eg: topical interests, traffic source, platform type), the decision tree grows, and every single element in the anatomy of copies is optimized based on past learnings. In an ideal world, every user truly sees a unique copy tailored to them.
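
A toy illustration of that growing decision tree (hypothetical segments and copy, not the production system):

    # Segment-keyed copy that falls back to progressively broader defaults.
    COPY_TREE = {
        "default": "Optimize your product copy",
        "mobile": {
            "default": "Better copy, straight from your phone",
            "returning": "Welcome back - your experiments are live",
        },
        "desktop": {
            "default": "Ship copy experiments with zero code",
        },
    }

    def pick_copy(platform: str, segment: str) -> str:
        node = COPY_TREE.get(platform)
        if isinstance(node, dict):
            return node.get(segment, node.get("default", COPY_TREE["default"]))
        return COPY_TREE["default"]

    print(pick_copy("mobile", "returning"))  # most specific match wins
    print(pick_copy("desktop", "new"))       # platform default
    print(pick_copy("tablet", "new"))        # global default

As more dimensions prove predictive, new branches are added, so the resolution of personalization grows with the amount of experiment data.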


How do you do that in a meaningful statistically significant way?

Not a gotcha, just genuinely curious.


Another reason (branding) why copy would look different for different products :)


There is an extra space in your hero tagline between 'and' & 'no'. :D


You might want to proofread your own copy - there are two spaces between "versioning and" and "no code experimentation", and the "white-space: pre-wrap" setting makes it ugly and obvious.


A/B testing and the whole idea of "optimizing for user growth" should be dropped from our thinking. This is business- and marketing-driven thinking that often leads to things like the deplorable quality of social media feeds.

Rather, we should optimize for user understanding or satisfaction - whichever fits. These are harder to capture but FAR more beneficial to the consumers of content.



