Launch HN: Bracket (YC W22) – Two-Way Sync Between Salesforce and Postgres
92 points by ianyanusko on Dec 12, 2023 | hide | past | favorite | 77 comments
Hey HN, I’m Ian, co-founder at Bracket (https://www.usebracket.com) along with Kunal and Vinesh. Bracket makes it easy to set up fast, bidirectional syncs between Salesforce and Postgres.

We have two main use cases: 1) building apps on top of Postgres instead of Salesforce, and 2) replacing existing Salesforce <> Postgres syncs (either Heroku Connect or in-house builds).

Postgres makes a bunch of things easy for developers: building responsive web apps, handling SSO and user access rules, and scaling large datasets like time-series data. But Sales, Customer Success, and Ops teams usually don’t have direct access to Postgres - instead, they rely on Salesforce as a type of database. These teams need up-to-date data on users, orders, and services, and they need to edit this data. As a result, in a lot of organizations there’s a sort of abyss between Postgres and Salesforce.

For example: say you're a car rental company. People rent cars via your web app built on Postgres, but your CX team uses Salesforce to track/update users, cars, and rentals. One of your users calls in to say that they were in an accident and the car is totaled. Your CX team (on the Salesforce side of the abyss) needs to manually update the status of the car - "Unavailable" - and reassign upcoming reservations to other cars. These edits made in Salesforce must sync to Postgres so that the user sees their updated reservation details.

We first came across this syncing problem when we were deep in pivot hell during YC W22. At the time, we were two weeks away from Demo Day and we had been pivoting for five weeks. We felt like failures every morning, and it seemed inevitable that we’d drop out. Then we talked to a founder who told us how hard it is to simply keep Airtable and MongoDB in sync with each other. He tried Zapier, he tried writing custom scripts, all to no avail. At last, we had 1) a technical problem 2) frustrating a smart founder 3) with a big potential market. We started to build, raised a conservative amount of money at Demo Day, and kept our burn extremely low.

After a year, we had a product keeping Airtable, Notion, and Google Sheets tables in sync with larger databases, but it still felt like a stop-gap: companies were often using us to stand up lightweight BI, avoid creating internal tools from scratch, or build quick admin dashboards. Once they felt the limitations of, say, Google Sheets, they moved off Bracket to a more permanent solution. Not only did this shrink the size of the market, but it also left us feeling like we were creating a vitamin, not a painkiller.

Then we talked with companies who wanted to sync CRMs - specifically Salesforce - with Postgres. They either had high-maintenance in-house solutions for syncing, or they were using Heroku Connect, which enables two-way syncs between Salesforce and Heroku-hosted Postgres. They couldn’t get rid of Salesforce and they couldn’t allow the two systems to get out of sync, so they were stuck with Heroku Connect.

There are two major problems with Heroku Connect, though: 1) it's super expensive, and 2) it ties you to Heroku Enterprise as a hosting platform. These companies wanted something as reliable as Heroku Connect, but hosting-agnostic and priced competitively. We sensed an opportunity to build something useful here, so we got to it.

Bracket makes it easy (90 seconds of setup) to get a Salesforce object and Postgres table syncing with each other in near-real-time. Using our app, you connect your Salesforce via OAuth, connect your Postgres via a connection URI (with options for SSL protocols), and either have Bracket generate a Postgres table from scratch or map fields between Salesforce and an existing Postgres table.

Once connected, Bracket can sync two ways or one way at a cadence decided by the user. We offer two sync methods: polling (default) and streaming. Using the polling sync method, changes sync on average every 30-60 seconds. Using streaming, changes sync on average every 10-30 seconds. You can read about how each method works, and the APIs they use, here: https://docs.usebracket.com/polling

We offer a few monthly subscription plans based on the amount of data kept in sync, with a free starter plan. You can try us without a credit card at https://app.usebracket.com/. If you don’t have a Salesforce or Postgres already set up, you can see Bracket in action here: https://www.youtube.com/watch?v=sRkaAa667T0

We’re hoping to build the best two-way syncing tool possible. We’ve got tools like Hubspot and MySQL in beta, and we’d love your feedback on other integrations that would be useful, product experience, and anything else that comes to your mind. It’s all very much appreciated. Thank you!



We built a similar thing to sync certain Salesforce objects to our .NET backend, but Salesforce has their streaming all built around CometD, which uses something I believe an insane person came up with: the Bayeux protocol. It's essentially HTTP streaming: you shoot a GET call to an API and it starts streaming bytes of data. If it stops or times out, you're supposed to shoot another GET to continue streaming; if nothing comes back, you time out and retry.

It's absolute hell: most community frameworks barely work, and there are zero good solutions for error handling. We had our Azure Application Insights constantly red because of timed-out requests that are in fact fine, because they just mean there were no events to transmit.

I refuse to believe that a multi-billion-dollar company like Salesforce couldn't come up with a RabbitMQ sink or virtually any better solution to that problem, especially since they could gate it behind a subscription most companies would pay for, given that they spend millions on SF either way...
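For the curious, the shape of that reconnect loop is roughly this (a toy sketch, not the real CometD handshake; channels, clientId, and advice negotiation are all omitted, and `connect` stands in for the blocking GET):

```python
def bayeux_poll_loop(connect, max_attempts=5):
    """Simplified Bayeux-style /meta/connect loop: each request blocks until
    events arrive or the server times out; a timeout (None here) is normal,
    not an error, and the client just reconnects."""
    events = []
    for _ in range(max_attempts):
        result = connect()
        if result is None:          # timed out with nothing to deliver: reconnect
            continue
        events.extend(result)
    return events

# Simulated transport: two empty long-polls, then one delivery, then quiet.
responses = iter([None, None, [{"event": "AccountChange"}], None, None])
received = bayeux_poll_loop(lambda: next(responses))
```

The point of the sketch is exactly the pain described above: the client cannot distinguish "error" from "no news yet" at the transport level, so monitoring tools that treat timeouts as failures light up red.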


Wow, congrats on making it through with CometD and Bayeux. I think Salesforce has realized that a lot of their APIs aren't useful when they rely on non-standard approaches. I think the move from SOAP to also including REST was a signal that they're trying to be more useful in this realm. I definitely agree a RabbitMQ sink would've been ideal! I'm hoping it's somewhere on their roadmap to make our lives easier.


I don't have any first hand experience but there is an integration with AWS EventBridge.

https://aws.amazon.com/blogs/compute/building-salesforce-int...

I know even less about Azure, but it looks like Azure Data Factory (?) provides some kind of similar functionality?

https://learn.microsoft.com/en-us/azure/data-factory/connect...


From memory, one reasonably good solution I saw was with a company called Validic that did IoT integration stuff for medical devices. They got you to set up an HTTP SSE stream, that you just connected to and consumed from as and when events came through.
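A minimal sketch of what consuming that kind of stream involves on the wire (this only handles `data:` lines and blank-line event delimiters; the full SSE format also defines `event:`, `id:`, and `retry:` fields, which are ignored here):

```python
def parse_sse(lines):
    """Minimal Server-Sent Events parser: accumulate 'data:' lines and
    emit one event payload per blank-line delimiter."""
    data = []
    for line in lines:
        if line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "" and data:   # blank line terminates an event
            yield "\n".join(data)
            data = []

# Two events: one single-line payload, one multi-line payload.
stream = ['data: {"device": "glucose-meter"}', "", "data: hello", "data: world", ""]
events = list(parse_sse(stream))
```

Compared with Bayeux, the appeal is clear: one long-lived GET, a trivially parseable text format, and reconnection semantics browsers already implement.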


I haven’t heard of Validic, but I’ll definitely check it out. Thanks for flagging!


I tried to build this a few years ago using Salesforce’s webhooks and discovered that low-value tenants are on shared infrastructure where things like scheduled jobs can run instantly… or 10 minutes later, and Salesforce makes no guarantees about when things will happen — making any attempt to use Salesforce as a source of truth unreliable.

The polling solution is neat but I am imagining it’ll run into issues with API limits and performance, especially for tenants on shared infrastructure: have you encountered that? Are you limited to working with customers that are paying Salesforce enough to have fast/reliable infrastructure?

(I don’t think it’s a problem for your success as your product is most valuable to those paying Salesforce for reliability, I’m just curious how you are thinking about the problems I gave up on! Maybe things with Salesforce have changed in the last 5 years.)


That's awesome that you set this up, thanks for paving the way! Our users who want real-time syncs and are watchful of their REST API limits typically opt for our streaming solution (I believe the Pub/Sub API was a recent addition to their change data capture APIs). Too much polling definitely has its issues with limits, as you mentioned, but we allow our users to set their own frequency to meet their needs.

If I'm understanding correctly, the scheduled jobs refer to the Bulk API (I agree it executes at seemingly random speeds). We only use the Bulk API on the initial “seed”, where we write a large amount of data from Salesforce to Postgres. Otherwise, when it comes to reading/writing data, we stick to the REST API, which we’ve found pretty performant and which Heroku Connect seems to rely on, too: https://devcenter.heroku.com/articles/mapping-configuration-...

> Are you limited to working with customers that are paying Salesforce enough…

Yeah, right now we do require that users are on Salesforce plans that include API access, which are Performance, Developer, Unlimited, and Enterprise (or Professional w/ API add-ons).


I'm guessing the GP's scheduled jobs are running within Salesforce, probably Apex. I'd note that I've seen inconsistent async processing delays even in EE and UE clients. First of all, I'm pretty sure everyone is on shared infrastructure, and second, the delay is at least in part relative to the amount of recent processing.


Yeah, so far, we’ve found that this combination of the three APIs is a happy medium between reliability, simplicity, and API limit consciousness.


I'm curious what's the biggest table you've tested on? Things start to get really interesting in Salesforce when your object has a million rows.


Yeah, definitely agree; we had to rework our logic for these cases. We've worked with objects of ~4 million rows. In general, with our polling-strategy syncs, the problem was more the size of each record: a table with 1M rows but 400 fields was way more problematic than one with 2M rows and just 5 fields.


Salesforce reps don't even know what Heroku Connect is. I couldn't get someone to sell it to me! I think the idea is great, glad to see someone picking it up.


Yeah, agreed! We were surprised to hear how deeply bundled Heroku Connect is in their Enterprise plan. Reps not knowing about Heroku Connect, coupled with its abandoned roadmap, makes us excited to build a better solution.


I know this is a late comment, but I have to ask, what do you think the market opportunity is on this?

500 million? Billion? Billion plus?

I'm curious how big a business like this can grow, by estimation, based on someone else's platform like this.


Good question. Our ultimate goal is to be the go-to for any two-way data syncs.

We think the Salesforce syncing market by itself is 500 million to 1B. If you start including all of the tools we have our eyes on (Hubspot, Monday, ERPs), the market size comfortably gets into the billions.

Going after Heroku Connect makes sense as a starting point, but we've got our sights beyond that.


Yeah, I used it quite successfully for a local nonprofit that kept most of their data in Salesforce. The eventually consistent syncing sidestepped a ton of headache for us and we could read straight from PostgreSQL. We just used API calls then writes to PostgreSQL for important changes (registrations, etc) originating from the website.


As a developer I love the idea. Choosing Postgres over, say, a GraphQL API is bold, but it makes sense. The customer can then scale up their Postgres instance as they hammer it more and more, and they might already be using it for their app, so developers can add it to their ORM. Or they could wrap this in a microservice.

I also liked the submission recently about a git client that uses SQL. I like the idea that more things can be exposed as SQL (either directly or by syncing with a RDBMS). There is a lot of good tooling around it and despite the S not meaning Standard, the dialects are close enough that it isn't a problem.


Agreed! We’re big fans of consolidating in Postgres. We’re also hearing some downstream benefits to having things scaled in Postgres (makes reporting easier, allows the data team to use SQL rather than hitting an API, etc) from our users.


This is very cool! I built a similar tool for a project with the need for two-way data syncing between Salesforce and MySQL via Laravel.

Salesforce objects map quite nicely onto Laravel’s Eloquent models (and booleans work fine! ref @ianyanusko in a sibling thread)

On the Salesforce side we use triggers to send a summary of field changes to Laravel to apply to its MySQL database. These are very cheap in terms of Salesforce limits and consumption!

Changes originating from the Laravel side use Salesforce’s REST API. It’s handy taking the round trip through Salesforce when saving changes as it lets flows/processes run and formulas to do their thing before data is persisted in MySQL.

Syncing data from Salesforce (to seed a database for example) is done via REST too. It works OK.

I considered productising it at one point, but ideas are a dime a dozen, it’s a lot harder to execute well and Bracket is doing exactly that, kudos!

This has been a thoroughly interesting post, and I’ll keep my eye on Bracket. You are however out of budget for my client, we had to achieve syncing on a shoestring ;)


Nice! Great point about the round trip - we do something similar for formulas and auto-generated fields like `Id`. That's awesome you built this in-house.

> Syncing data from Salesforce (to seed a database for example) is done via REST too. It works OK.

Have you thought about using the Bulk API for seeding? We started relying on that instead of REST, which helped us seed massive DBs much faster / more efficiently.


REST isn't too painful yet, but it will be in the not too distant future.

As you know, the REST API will deliver a maximum of 2000 records per request, so beyond a certain scale it's not really tenable in terms of speed & consumption of API calls.
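That pagination pattern looks roughly like this (a sketch with a fake transport; `fetch_all` is a name made up for illustration, and a real client would pass an authenticated HTTP getter instead of a dict lookup):

```python
def fetch_all(get, first_url):
    """Follow Salesforce REST query pagination: each response carries a batch
    of records (up to 2,000) plus 'done' and, when more remain, a
    'nextRecordsUrl' pointing at the next page."""
    url = first_url
    while True:
        page = get(url)
        yield from page["records"]
        if page.get("done", True):
            return
        url = page["nextRecordsUrl"]

# Fake transport standing in for authenticated HTTP calls.
pages = {
    "/query?q=SELECT+Id+FROM+Account": {
        "records": [{"Id": "1"}, {"Id": "2"}],
        "done": False,
        "nextRecordsUrl": "/query/next-1",
    },
    "/query/next-1": {"records": [{"Id": "3"}], "done": True},
}
ids = [r["Id"] for r in fetch_all(pages.get, "/query?q=SELECT+Id+FROM+Account")]
```

Each page consumed this way costs an API call, which is why the page cap bites at scale: a million-row object is 500+ requests per full read.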

So yes, Bulk API is probably going to feature soon.

Cheers Ian!


Nice, I can see the need for this.

> With the polling method, Bracket stores an encrypted copy of your data as an intermediate source of truth. We mostly do this to prevent infinite event loops, but it also helps with merge conflict resolution.

I see you have an on-prem version, but I am still not convinced why you need to store it. Can it not just be stored in an extra table on the client's side?


Yes, you’re correct on that! Sorry for the lack of clarity in the docs. The snapshots would be stored in a client-side table in these cases. These snapshot tables/collections can be stored in either your own PG or MongoDB.


Neat! How specific is your solution to Postgres? Could it be ported to another db engine?

(And, how are conflicts resolved? In a huge system with millions of records coming from everywhere it can fast become nightmarish?)


> How specific is your solution to Postgres? Could it be ported to another db engine?

Our polling approach is relatively database-agnostic. We just need to handle each DB's quirks with our transformers (e.g. dealing with MySQL's lack of BOOL field types).

Streaming is currently Postgres-specific. We're planning on rolling out support for MySQL next, after we've finished our Hubspot integration. Do you have a specific DB in mind?

> (And, how are conflicts resolved? In a huge system with millions of records coming from everywhere it can fast become nightmarish?)

The primary source wins any merge conflicts that happen within a sync period. With polling, it's pretty straightforward: at every poll, we see how each side has changed, and for any record pairings for which there were edits on both sides, we prefer the primary source.

With streaming, we employ a hybrid method, where we only poll when events occur in either Salesforce or Postgres. If at that poll, the same record has been edited on both sides since the previous poll, we still prioritize the primary source (Salesforce). You can read the step-by-step flow here: https://docs.usebracket.com/streaming#the-streaming-sync-met...
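That per-poll merge rule can be sketched like this (illustrative only, not Bracket's actual implementation; records are flattened to id → value, and deletes are ignored):

```python
def resolve_poll(snapshot, primary, secondary):
    """One polling cycle's merge: for each record, take whichever side changed
    since the last snapshot; if both sides changed the same record within
    the sync period, the primary source (e.g. Salesforce) wins."""
    merged = {}
    for key in set(snapshot) | set(primary) | set(secondary):
        base = snapshot.get(key)
        p, s = primary.get(key, base), secondary.get(key, base)
        if p != base and s != base:
            merged[key] = p          # conflict within the sync period: primary wins
        elif s != base:
            merged[key] = s
        else:
            merged[key] = p
    return merged

snapshot   = {"r1": "a", "r2": "b"}
salesforce = {"r1": "a", "r2": "B"}    # edited r2
postgres   = {"r1": "A", "r2": "b2"}   # edited r1 and r2
merged = resolve_poll(snapshot, salesforce, postgres)
```

Here r1 takes the Postgres edit (only one side changed it), while r2 is a true conflict and resolves to the Salesforce value.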


> The primary source wins any merge conflicts that happen within a sync period.

This is a very fancy way of saying that you just drop conflicts and pretend they didn't happen. Syncing databases is very, very tricky. Conflicts are a big part of the trickiness.


Agreed on the trickiness! Our early users largely told us they preferred one source to take precedence in a conflict, and would rather set that general rule than review every conflict manually. But a handful have expressed interest in the latter approach, so it's on our roadmap to build.


This looks great! The UI and docs look very nice. Of course long-term reliability is what really matters in this space. I can definitely see incorporating this into client proposals.

A few things I can't immediately see from the docs: do you support subsetting a data source -- only sync records matching criteria? Do you support to/from different instances of the same connector (e.g. Salesforce to Salesforce)? Can you perform any transformations like map over the data, normalize or denormalize tables, etc?

Many clients I can think of this being most useful for would rather host it themselves, is that an option?

One critique: I can't imagine recommending this to a client without SSL support. I'd highly recommend just baking that in to every tier. It would demonstrate that you're serious about keeping your customers' data secure.


Thank you!

> do you support subsetting a data source -- only sync records matching criteria?

In a one-way sync from Postgres to Salesforce, yes, you can apply filters using a SQL statement, but we’re working on adding this to the Salesforce side as well as two-way syncs. From your perspective, how would you want to set these filters? A SOQL query, or something else?

> Do you support to/from different instances of the same connector (e.g. Salesforce to Salesforce)?

Yup, our infra is data-source agnostic! But Salesforce <> Salesforce is not heavily tested, so there may be some funky behavior with oauth tokens if you’re trying to connect objects from two different instances during onboarding. Curious what use case you have in mind?

> Can you perform any transformations like map over the data, normalize or denormalize tables, etc?

Besides the one-way SQL filter I mention above, we try to make field mapping easy between the sources by automatically transforming when necessary (e.g., transforming a Salesforce picklist to a Postgres integer and vice versa). But we’re working on allowing users to create more detailed field-level transformations in the next few months.

Thanks for the feedback! Totally hear you on SSL, we’ll move that to every tier.


> From your perspective, how would you want to set these filters? A SOQL query, or something else?

For myself, I want to write SOQL there, though I'd guess many of your target customers will want a point-and-click option. Selecting a list view for that object could be an interesting UX hack that might be worth exploring: there's some Salesforce Labs product that does that.

> Salesforce <> Salesforce is not heavily tested... Curious what use case you have in mind?

I've seen a number of configurations for different purposes, the most common one being sandbox data movements.

Management of multiple production orgs gets complex fast, but there are a few places where a tool like this could find a niche: org migrations come to mind. There's often an interim period where you're two way syncing (even though you'd rather not!) before the org being phased out is done.

Good luck!


Really helpful on both fronts, thanks for the thoughts. Both the sandbox and migration use cases make a lot of sense.


Sorry, forgot to respond to one piece:

> Many clients I can think of this being most useful for would rather host it themselves, is that an option?

Right now you can self-host the associated datasets (like the Postgres event log table), but we're still working on allowing you to self-host the entire service. Stay tuned :)


Neat. I once wrote a Salesforce to Oracle sync app. I think the goal was to bulk pull data out of Salesforce for processing in Oracle to avoid Salesforce API costs. This was years ago. It was a fun and aggravating project.


Nice, I hear you on "fun and aggravating" :)

Sounds similar to the use cases we're seeing, where it's not only easier to process/build on Postgres, but also saves you on the Salesforce API.


I'm currently on a team tasked with thinning out our Salesforce App Exchange app. Bracket looks very cool.

1. Do you have any plans to release an API rather than utilizing a webapp for defining the sync? 2. Does your Salesforce integration support syncing metadata, including custom metadata? 3. Do you have data on the impact Bracket has on platform events and other Salesforce limits? 4. Can you share any information on pricing?

Thanks!


1. Yes, it's on our roadmap for Q1! We've been getting that request a lot. 2. We don't currently sync metadata. 3. Our footprint on your Salesforce API depends on whether you're using polling or streaming, and then on the cadence of your syncs or the frequency of changes. You can see some data on best/worst-case scenarios here: https://docs.usebracket.com/connecting/salesforce_api 4. Yup, we price based on the amount of data kept in sync. You can see more here: https://www.usebracket.com/pricing


Great, thank you! Syncing metadata may be a dealbreaker for me, but I'll be thinking of Bracket as we build solutions going forward. Cheers.


As the founder of what would become Heroku Connect (aka Cloudconnect), this thread warms my heart. Love to see the innovation - good luck!


Thank you! It means a lot to see your comment here


Even if this only accomplished one-way sync with backup and disaster-recovery capabilities, it'd probably find a 10x wider audience.


Thanks for the comment - are you saying that one-way sync has a 10x wider audience than two-way syncs?


Without knowing the target market, I'm thinking the taxonomy of "two-way sync" is understood by a very narrow community. ISVs have been using this approach for years. So maybe that's the audience?

If so, then Heroku connect probably does need an alternative.

A business user (non-technical 10x audience) just wants to sleep at night knowing their daily backup was successful and they have recovery capabilities on-demand.

There are unicorn scale companies in this space (such as https://www.owndata.com/). And Salesforce is getting back into the backup game.


Gotcha, that makes sense. From our perspective, the one-way sync market (backups and ETL/rETL) was already saturated with heavily-capitalized players who were hitting each other hard with marketing dollars. We didn't feel that we had a unique insight with one-way syncs, but did feel like we'd come across something unique with two-way use cases.

So we decided to double down in this smaller, underserved area for now as we try to build something people love, even if it's a bit niche.

That being said, there probably is room in the backup space for challengers against the incumbents - it's a massive market.


I really admire the dedication and motivation to be able to work on such incredibly boring things (for me).


Thank you!


Looking forward to the HubSpot + MySQL integration.

Disclosure: I'm one of the HubSpot founders.


HubSpot and MySQL are both next on our list to come out of beta! I'll shoot you a message when they're out :)


I wish you had built this in 2021 but congrats!!

Message me, might have a use-case with a Fortune 100.


Thanks! Going to shoot you a note


Doesn't fivetran/hightouch do this?


Fivetran and Hightouch do ELT and reverse ETL, respectively, which are one-way syncs. While Bracket can handle one-way syncs, two-way syncs (in situations where you need writes from both Postgres and Salesforce to sync) are our bread and butter.

We’ve worked with some folks who were cobbling together ELT and rETL to achieve a type of two-way sync, but they faced three problems with this approach: most importantly, handling for infinite loops, where an insert in one data source triggers an insert in the other, which then gets picked up anew, etc. Second, it wasn’t clear how to handle merge conflicts when the same record is edited from both sides (we give priority to the primary source). Finally, it was frustrating to debug problems when having to log into two separate tools.
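The loop-prevention idea can be sketched like this (heavily simplified; `sync_once` is an illustrative name, deletes and conflict handling are ignored, and side A is treated as the primary):

```python
def sync_once(snapshot, side_a, side_b):
    """One polling cycle of a two-way sync. The snapshot is the intermediate
    source of truth: only values that differ from it count as fresh edits,
    and it is advanced after each write, so a change applied to the other
    side is not re-detected (and re-propagated) on the next cycle."""
    for key, value in list(side_a.items()):   # side A (primary) goes first
        if snapshot.get(key) != value:
            side_b[key] = value
            snapshot[key] = value
    for key, value in list(side_b.items()):
        if snapshot.get(key) != value:
            side_a[key] = value
            snapshot[key] = value

snap, salesforce, postgres = {}, {"r1": "x"}, {}
sync_once(snap, salesforce, postgres)   # r1 propagates to Postgres
sync_once(snap, salesforce, postgres)   # no-op: the echo is absorbed by the snapshot
```

Without the snapshot, the second cycle would see r1 as a "new" edit on the Postgres side and write it back, which is exactly the infinite loop a naive ELT + rETL pairing produces.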


Nice. The two-way syncs are where all the complexity lies, and why people end up paying the big bucks to like Mulesoft.


For sure, definitely has not been an easy problem to solve. The two-way syncing is one tough element (for the reasons mentioned above), and configuring our requests to the Salesforce API in a way that minimizes our footprint (while still keeping syncs fast) has been another tricky task.


I'll also give a shout-out to Airbyte (https://airbyte.com/), with which I've had some limited success integrating Salesforce with a local database. The particular pull of Airbyte is that we can self-host the open-source version, rather than pay Fivetran a significant sum to do this for us.

It's an immature tool, so I don't yet know that I can claim we've spent _less_ than Fivetran on the additional engineering and ops time, but it feels like it has potential to do so once stabilized.


I don't know much about Bracket, but looking through their docs[1] it looks like they support streaming from Salesforce as an alternative to batch/scheduled jobs (which is what Fivetran does)

[1] https://docs.usebracket.com/streaming


Yup! We offer two ways of interacting with Salesforce: polling and streaming. The benefits of streaming are faster syncs and optimized Salesforce API usage.

Right now, we're focusing on use cases that require changes to be synced within a minute. Fivetran and its ilk are generally more focused on large data batches at less frequent intervals (e.g., to unlock 1-minute syncs at Fivetran, you must be on the Enterprise plan, whereas we offer that out of the gate with the Starter plan).


We use Fivetran for big data sets but less frequent one-way syncs as you reference, and it is great but VERY expensive.


Yea, but it's a pain to have multiple ETLs when one tool would do the job.

I've always been surprised that fivetran/hightouch don't ever throw in a .reverse().


Yeah exactly. One-way data syncing is a big market, and these companies have plenty of use cases to tackle, which is why I think two-way syncing has not been a priority for them.

That’s one reason why people trying to stand up two-way syncs have been stuck with dated tech (Heroku Connect) or have had to build in-house.


Congratulations on not giving up and solving a boring but hard problem.

What's the tech stack?


Thank you! We’re primarily a TypeScript team with Next/Nest frameworks on GCP, using serverless where we can.


Get in touch if you want to expand to NetSuite. We can have a chat...


We (Breadwinner) offer a live sync of NetSuite ERP data into Salesforce using prebuilt objects (and can handle custom objects). It's called "Breadwinner for NetSuite".

We also have a NetSuite to Postgres sync that's currently in Alpha.

For those of you considering an integration to NetSuite, keep in mind that NetSuite has both SOAP and REST connections, but (as of today) neither works with every single object. So anything you build will likely have to leverage both APIs. Pagination can also take 30-60 seconds per page retrieval, so that's an added complication for large data sets.

(Yes, you could possibly leverage NetSuite Restlets, but then you have custom code on each and every customer, hence we avoid that).

If anyone wants help on building an integration with NetSuite, happy to see if I can help. My email is in my about.


If one-way sync will do, we (Estuary) have been investing in a NetSuite connector that we've _just_ started testing with a few customers. Both SuiteAnalytics and SuiteQL with full support for custom tables & schema.

[1]: https://docs.estuary.dev/reference/Connectors/capture-connec...


I built an in-house one-way sync from NS to Google BigQuery. Two-way is pretty hard, as their REST API is still “in beta”... and nobody wants to deal with SOAP... How big of an opportunity do you think this is?


Very curious what you have in mind. My email is in my bio if you're open to sharing.


Love the focus. How did you find the people to talk to?


Thanks! We've had to internalize the lesson "startups can do one thing well at a time" :)

At first, it was a matter of doing discovery with people who we thought would find Bracket useful. These weren't sales convos, but pure feedback/discovery. From that process, we got a good sense of which roles face these pain points, and exactly what those pain points are.

Since then we've split our time between 1) writing content/distributing it in places people would find it helpful and 2) reaching out directly to people who would use Bracket. In the latter case, we really don't like generic sales campaigns, so we spend a good bit of time crafting each message.


Is bracket using Postgres CDC (via WAL)?


We use a custom trigger to build an event log in Postgres. We actually found it more straightforward to use this approach than to use the WAL packages we considered.

If you're curious, you can read more about our approach to it and the actual SQL we use here: https://docs.usebracket.com/streaming#postgres


most difficult part building this?


With Salesforce specifically, getting a stable and reliable event-driven stream. That's why we ended up adopting a hybrid approach of subscribing to pub/sub events, but also periodically polling the REST API in case pub/sub misses anything.
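That hybrid trigger boils down to something like this (an illustrative sketch; `should_poll` and its parameters are made up for the example, and real scheduling, backoff, and deduplication are omitted):

```python
def should_poll(pending_events, now, last_poll, safety_interval=60.0):
    """Hybrid trigger: poll immediately when streamed (pub/sub) events are
    pending, and otherwise fall back to a periodic safety poll in case
    the event stream dropped something."""
    return bool(pending_events) or (now - last_poll) >= safety_interval
```

The safety poll is what turns an at-most-once event stream into an eventually consistent sync: a missed event delays a record's update until the next interval instead of losing it forever.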

Salesforce's scattered docs don't make best practices super clear, so designing the syncs took us some time.


Meta observation: lots of YC companies basically build their startups around some workload (e.g. salesforce <-> db) and sell some managed service around it. This game is quite difficult to scale and sell to enterprise. But it's understandable that many technical founders are most familiar with these types of "painpoints" and thus have their startup thesis around them.


It is incredibly hard, and a giant overhead on each of these SaaS startups, to do marketing and enterprise sales, which shows up in pricing (30-70%!). These solutions still need to be vendored in and integrated at each client.

Rolling up all the YC single-feature SaaS business into a single sales org and providing an a la carte set of solutions to partners would make a ton of sense.

I guess that is the theory behind SaaS.group and its ilk.


Agreed that many technical founders build on the pain points they've experienced themselves. I'm curious your take on why the managed services approach is hard to scale?

IMO, the reason many YC companies struggle with enterprise sales is because it's generally much harder than PLG, with long sales cycles and delayed feedback loops. It was definitely a learning curve for us!


(opinions as ex-AWS GTM specialist for AI)

An underrated challenge with enterprise sales is that customers often misguidedly believe that 1) they can build it themselves, 2) they should worry about your company's ability to "scale" to their needs, and/or 3) they need to "own" that responsibility for personal/political/career reasons.

Sales is hard for everyone including Azure, AWS, etc, just gotta keep grinding at it.


There's an opportunity for YC (if they don't already do this) to have pools of fractional VPs of Sales and salespeople who can help startups transition to enterprise sales, through advice or by taking a role.



