Minimum Viable Secure Product (mvsp.dev)
173 points by jot on Aug 16, 2023 | hide | past | favorite | 84 comments


I’m a sec eng who has worked at startups, and I don’t love this list. Some of it is valid, but it generally seems off target in how process-focused it is, with ideas like startups doing data flow diagrams, vuln management, and application-level logging.

You’ll get much more bang for buck for the product’s security by working on the enterprise and infra side - deploying EDRs (somehow not on this list?), tightening up GSuite sharing and email settings, Okta (I see SSO on the list), anti-phishing capability, GuardDuty and Security Hub, a pw manager, getting on IaC early to support ops and sec goals, and a lightweight SlackOps infra for security alerts. Almost none of this is mentioned.

The emphasis around the application seems misguided due to how (a) pragmatically it’s going to take much longer to get appsec controls and logs vs infra and enterprise controls/logs and (b) the vector in is usually Bob and Jodie in recruiting and HR getting phished, vs an appsec breach.

Also, it seems to break the golden rule of security enabling revenue with acceptable risk trade-offs and pragmatic controls. All the process in here doesn’t seem pragmatic. The controls in here seem useful overall but not where I’d go at first to secure a fast startup.


> You’ll get much more bang for buck for the product’s security by working on the enterprise and infra side

I am not a security engineer. But as a generalist software engineer I do not believe this. Every time I go looking in app code with an eye on security, I immediately find all kinds of vulnerabilities.

And yet, where I work, the security guys don't read or write a single line of app code. They do like you say, and focus almost entirely on the infra/business process side of things.

I find it really hard to respect these people when they are more worried about locking down what jira tickets I can view than they are actually securing our product.


My experience of IT Security in enterprises is that it's simply a checkbox to check off so that someone, somewhere, can say "I followed best practices; this breach cannot be blamed on me!"

Preventing breaches is an objective, but it's purely secondary to CYA. This is why the Security Strategy Policy exists and often looks so out of touch with reality:

1. An external consulting firm advised on that policy document.

2. Someone signed off on that policy document (CEO/CTO/COO/etc).

3. Everyone below that person ticked-off every single clause in that document.

4. Breach occurs.

5. Employees can't be blamed; they did everything that was asked of them.

6. CEO/CTO/sign-off person cannot be blamed: they sought, paid-for, received and heeded professional advice.

7. The external firm cannot be blamed, as their advice is "in line with modern security guidelines."

In this sense, no one dares shift focus from whatever the policy document says, because then they can be blamed.


Security is adversarial by nature. Us vs. them.

But the reason I find your comment interesting is that it turns that towards the inside.

Trivially the ones to blame are the attackers. But the responsibility for protecting against them is treated like a hot potato. It seems to be more important to say “It’s not our fault” than to prevent attacks.

Why is that?

I think a big reason is the interface of the technical side. People want to hear that it’s safe, but engineers can only really say “It’s safe against these particular things.” But it’s uncomfortable to hear that.


If the approach to an accident/catastrophe/breach is to find guilt and assign blame, then there are strong and clear incentives to avoid blame.

In a lot of contexts this cannot be significantly changed by company culture as many legal systems adopt this approach.

The oft-mentioned other approach is to assume that human and operational error cannot be eliminated[0] and to design systems that withstand many compounding mistakes.

Afaik we only do this in aviation and infrastructure.

[0] these two approaches are similar and live on a continuum between "we punished the guilty party, the problem is no more" and "let's make sure that neither this nor any similar problem ever happens again". Both extremes are often wrong, but in many cases it is a good idea to move towards the latter.


This rarely is the dynamic for startups. They’re not bringing in KPMG to advise on security, lol.

The closest that’ll happen is sales RFPs everyone had to pass to sell to enterprise, which yes can be full of holes and checkboxes.


Locking down Jira tickets isn’t enterprise sec, so that’s not what I’m referring to.

These are the risks that actually lead to bad exploits for startups, and the controls available:

- SaaS vendor is breached and doesn’t tell you for a week: Okta allows you to cleanly kill access for that SaaS, and for impacted users

- dev downloads a database access package in VS Code, puts in prod creds, and the package is calling home because it’s surprise malware: EDR catches it

- dev has prod passwords on their MacBook note sheets, or accessing prod from a home computer: process enforcement, password manager, secrets vault

- customer service employee is phished, pivots into customer data: IR plans and email security tools

- product’s API is “move fast and break things” and full of vulns: except for the worst of them, get a WAF/Cloudflare to cover, and be careful about public endpoints beyond that.

The above secures a product with limited bandwidth and a small security team, if you’re thinking defense in depth (which you should be).
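The prod-creds risks above mostly come down to keeping secrets out of source and out of local note files. A minimal sketch of the "secrets vault" discipline (the `DB_PASSWORD` variable name and the function are hypothetical illustrations, not any specific tool): the app reads its secret from the environment, injected at deploy time by a vault or CI, and fails fast instead of falling back to anything hardcoded.

```python
import os

def get_db_password(env=os.environ):
    """Read the database password from the environment (injected by a
    secrets vault or deploy tooling), never from source control or a
    developer's notes file."""
    pw = env.get("DB_PASSWORD")  # hypothetical variable name
    if not pw:
        # Failing fast is the point: no hardcoded fallback to leak.
        raise RuntimeError("DB_PASSWORD not set; refusing to start")
    return pw
```

The same pattern applies to API keys and signing secrets; the win is that rotating a leaked credential is a config change, not a code change.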

It’s important for sec engs to code, no doubt. Two things to note that I think you’re missing, putting aside whether your sec team is focused on Jira tickets vs the above:

- code vulns are paired with exploit paths. If a WAF is there, and the code doesn’t pivot into prod DBs and so on, these can wait to remediate.

- Have you tried remediating code vulns or getting application logs for security at a product-focused startup? Devs are the worst enemy here. Anything touching the code is “slowing the product,” usually. Security is a political job, and this is an easy way to burn all your social capital if you chase every vuln.

In short, you’re thinking about vulns, but not threat models. A startup sec program is all about smart threat modeling.


> entirely on the infra/business process side of things

Considering most of the major incidents over the last few years fall into these categories, it makes sense.


Exactly


True as it is, nothing you list here addresses the security of the product itself, except for IaC. And the site describes securing the product.

Will edit with commentary in a moment.

Edit: alright, fair criticism re business security controls given that the checklist's first section is specifically about business controls.

And the rest should be focusing on automating those outcomes rather than just what the outcomes are. I agree that manually creating DFDs is just going to slow teams down, though it's definitely a capability that more security-mature or regulation-burdened organizations will need. When you're this early though, better to harden IaC based on secure reference architectures up front and lint the heck out of code than to encumber all of one's processes with manual controls.


Read my above replies. The product gets secured by defense in depth on the enterprise side. Edit - plus, say, a WAF, being disciplined about public endpoints, and dialing in how users authn/authz.

Unless you have the unicorn dev who also does appsec, focusing largely on appsec early on is a losing battle politically; it will get your one sec eng at the startup - who also has to do a ton of other stuff - to quit for a better job in 12 months, and so on.

Defense in depth and threat modeling need to rule a security program for a startup, and using MVSP as your guideline would miss that nuance.


That's a valid point. We will address this in our upcoming work group meeting.

If you're working for a startup, we would greatly appreciate it if you could try implementing MVSP in your organization and let us know which controls were difficult or problematic to implement. These controls can potentially be considered for removal.


Hi there, I'm one of the authors of MVSP.

Thank you for your feedback! I've carefully read all the comments in this thread, and it seems there's a misconception that MVSP replaces or substitutes enterprise or compliance controls.

The goal of MVSP is to add the "S" in Minimum Viable Product (MVP). It establishes a minimum standard for a software artifact, like a SaaS app, to be considered for purchase by companies that use MVSP as an RFP baseline.

MVSP originated from the security contract language used by big corporations like Salesforce, Google, Dropbox, and others in the FAANG/MANGA group.

If you believe that certain requirements could be removed, please feel free to join our mailing list and share your feedback for discussion: https://mvsp.groups.io/g/mvsp

Our next working group meeting in the US timezone is scheduled for September.


Where I think we’re not aligned is that you’d secure the product to secure revenue, right? Passing RFPs makes total sense, but adding MVSP into an RFP check adds another layer of checklists to go through, and if it’s for RFPs, the checklist likely gets done prior to much else, so these checklists anchor your security program early on. Anchoring a startup’s security program to the MVSP would be misguided IMO, for my reasons below. That doesn’t mean it’s not useful, but it’s not where I’d burn my social capital and budget as a sec eng to meaningfully secure a firm.

That’s the goal of running a security program for a startup, ultimately - get them to Series C without a serious breach, and then build out a security team to get them to IPO and through SOC2 etc.

So my concern is that if you have just one shot to stand up a security program, and the goal is the above, building out an appsec program with the MVSP (which also skews into a bunch of process, beyond appsec) seems to veer far into work that’d be unproductive for a startup security team.

The reason is it hits what I said above in another thread - unless you have a dev knowledgeable and willing to do all the appsec controls suggested in the MVSP, that’s a sec eng asking for each of those controls. The odds of a clean DevOps pipeline even existing yet - one you could slot SAST and some DAST approach into to catch the vulns the MVSP suggests - seem low. Ditto the budget for a pentest.

That’s a sec eng annoying startup devs for constant code changes for code that works and is getting revenue (bc this is how the 101 startup dev usually sees these requests).

That’s also a sec eng who is usually doing semi-GRC, supporting sales processes, doing all the above enterprise sec 101 stuff, and potentially ops/IT support, as there’s usually only 1 or 2 of these hires early on.

So, that makes me think the MVSP as written misses the pragmatic reality of what works for startup sec programs, although I’d agree it’d work for a product. But to secure the product given the realities for a startup dev: a WAF for the product endpoints helps deal with the worst appsec problems, and then use defense in depth on the enterprise side. Phishing the customer service rep, a SaaS breach, or sketchy packages are very commonly what gets startups.


EDRs and phishing defenses are largely for the staff/enterprise side of things, not the product itself usually. Although people should put EDRs on product servers too; for Linux at least (imho), auditd logs centrally shipped, configured and monitored qualify as minimally viable.


For a startup sec program, those delineations don’t exist which is my point.


Absolutely agree. Even basics like password manager instead of one password everywhere.

It is about the overall attitude. One cannot expect people to think about secure code, when they don't follow basic principles outside of the product itself.


When I visit the link I see:

> for enterprise-ready products and services

Are we seeing something different?


> anti-phishing capability,

Can you tell us more about that? What kind of capabilities can protect you against social engineering?


It’s a mix of training the highest-risk users and then layering a lot of defense in depth behind them: the security tools offered by O365 or (more commonly for a startup) GSuite, MFA everywhere early on (SMS would be fine in the very worst case for a lean/mean startup), EDR to catch DNS logging and anything a successful phish drops on the endpoint, and an IdP like Okta so you can quickly kill sessions for the breached user. A more advanced approach is to sequester the most vulnerable users into something like Amazon WorkSpaces, where a successful phish goes nowhere. This can be harder to enforce, though.


> EDR to catch DNS logging

What's an EDR? And what's the link between phishing and DNS?

> like Okta so you can quickly kill sessions for the breached user

I can see the usefulness of the ability to kill compromised sessions, but if your user gets their Okta account pwned, then the attacker has access to everything the user had access to without needing to do any additional work, which is the worst possible scenario. And unless the user has the right reaction (seeking help ASAP), your kill switch isn't going to save you.

Also, MFA isn't really helping against phishing: the user is going to give the MFA code to the attacker anyway…


> Okta

While I agree Okta is the most feature complete why only Okta? Are there any other products that can replace Okta?


Azure AD Connect, JumpCloud, just to name a couple of others. As an end user, I end up not caring; they're all about the same experience.


Azure AD can be a bit Microsoft-centric, and startups - and their SaaS providers, who are startups themselves half the time - are often in the GSuite ecosystem. AAD probably works, but Okta is easy.

Jumpcloud got breached badly recently. Okta has as well but their breach wasn’t the same.

Generally with IDPs, worth going with a gold standard that works well. Okta is that


SAML IdP and password manager is basically equivalent (ideally add SCIM).

Okta is definitely the most complete, but plenty of other options.


Could you please suggest a site or list that's more pragmatic?


The name is too cute, and actively misleads. You can't call this a "minimum" anything; it's an opinionated list of controls that several of the most savvy security teams I know don't uniformly implement.

Just off the top of my head, things that aren't even universally seen by practitioners as good things, let alone things everyone does as a "minimal" baseline:

* On request, enable your customers or their delegates to test the security of your application

* Implement role-specific security training for your personnel that is relevant to their business function

* Comply with all industry security standards relevant to your business such as PCI DSS, HITRUST, ISO27001, and SSAE 18 --- LOL to this whole line item.

* Maintain an up-to-date diagram indicating how sensitive data reaches your systems and where it ends up being stored

There is then a longer list of controls that I think most practitioners would say are good things, but that aren't always P1-prioritized (for good reason, to make way for more important things). CSP headers, SLSA level 1 builds, media sanitization policies; these things are situationally important.

I think an opinionated checklist is a fine thing, but when you call something the "minimal viable" standard, you set yourself up to explain how lots of well-run companies are viable without these things.


This is about as accurate in naming as a “minimal life list” that includes things like living debt free and having a $1M insurance policy.

Or closer to recommended practices. As these are definitely not minimal since there are many products that won’t meet them yet are still viable.

It’s also funny how cyber consultants seem more apt to add “required” things that really tend to increase their billable hours. Like how accounting firms love Sarbanes-Oxley and like to add in new rules that they claim are necessary.


What is Sarbanes Oxley and what’s it got to do with accounting firms?

Edit: it’s a US law (shortened to SOX) that has to do with compliance controls for public companies, passed after Enron and other scandals.

https://en.m.wikipedia.org/wiki/Sarbanes%E2%80%93Oxley_Act


> several of the most savvy security teams

I mean the list isn't for savvy security teams. Or, I'd say, not even for companies with any kind of security team.

For the typical startup with zero people with any security experience, it's something they can do. If they get to a point of hiring a security team, they're past the point of needing such a starter list.

> Comply with all industry security standards relevant to your business such as PCI DSS, HITRUST, ISO27001, and SSAE 18 --- LOL to this whole line item.

That's silly to say. You must comply with regulations that apply to your industry. Try setting up a bank and just LOL at banking regulations (SBF?). Of course, if none apply to your industry, ignore them all.


> Comply with all industry security standards

Yeah, this should just be the list. But then it wouldn’t be its own thing. It would just be being SOC 2- or NIST 800-certified.


> * Comply with all industry security standards relevant to your business such as PCI DSS, HITRUST, ISO27001, and SSAE 18 --- LOL to this whole line item.

I stopped reading at this point, completely disillusioned. For anything _minimally viable_: it's 2023 and I've still seen large companies with credentials and passwords committed into their code...


For "sensitive" information, as this is scoped to, one shouldn't LOL at the standards line.


Not a great list. Very opinionated, and much as I'm security-conscious, I disagree with a number of "minimum" recommendations there.

This list looks like it was written by people who are not paying for anything, so they don't understand the tradeoffs, costs and compromises. I can assure you things look quite different when you are a solo founder running the business and you have to pay for everything. Technical stuff doesn't change, so recommendations like "Do not limit the permitted characters that can be used in passwords" still hold, but "Contract a security vendor to perform annual, comprehensive penetration tests on your systems"?

Also, "Publish the point of contact for security reports on your website" and "Respond to security reports within a reasonable time frame" — I'm already spending too much time responding to "security researchers" trying to shake me by sending (rather silly and generic) "vulnerability reports". Some of them follow up by asking if they can "publish this on their social media channels" if I don't respond. I really don't need more "security reports" for my website.


> This list looks like it was written by people who are not paying for anything, so they don't understand the TRADEOFFS, costs and compromises.

That's the important word there (emphasis mine). I'm doing an MVP now for a client for a custom app (used internally by that client, i.e. only employees are users), and my recommendation for security is "I'll use a simple XOR symmetric stream cipher for now. We can switch to TLS[1] if the system turns out to be useful".

Yeah, cut corners on other things as well - they just want to see how viable this thing is in practice.

[1] While TLS is nice, certificate management becomes a pain in the rear for remote devices.
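For the curious, the XOR approach described is roughly this (a sketch, not the commenter's actual code) - and it also shows why it's prototype-only: with a repeating key, XORing two ciphertexts together cancels the keystream, so key reuse across messages is trivially breakable by a motivated attacker.

```python
import os

def xor_stream(data: bytes, key: bytes) -> bytes:
    """XOR data against the key, repeating the key as needed.
    Applying it twice with the same key restores the plaintext,
    so encryption and decryption are the same operation."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = os.urandom(32)                        # shared secret, delivered out of band
ct = xor_stream(b"sensor reading: 42", key)  # obfuscated on the wire
assert xor_stream(ct, key) == b"sensor reading: 42"
```

This buys obfuscation against casual inspection, nothing more - which is the trade-off the comment is explicitly making for an internal prototype.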


So really it’s a prototype. Nothing wrong with that.


I think a lot of comments here might be missing the point of this standard - it’s not specifically designed just for startups or to cover general organisation security (like EDR). It’s a product-feature-oriented checklist that should help short-circuit some aspects of supplier security due diligence and establish a set of product security features that any B2B product with ambitions to sell to enterprise customers would benefit from aspiring to, and anyone buying can leverage. That means large multi-product businesses looking to ship an MVP for a new service as well as startups with a bright idea but a possibly limited understanding of enterprise IT compliance needs.

I’ve got some experience as Head of Cyber/InfoSec at a couple of startups/scaleups and I see this as potentially useful in a bunch of ways if broadly adopted. It establishes a baseline both for our own products and our suppliers, attests to stuff that actually affects how we integrate with and operate a third party product (ISO 27001 gives me no clue whether you support SSO, have viable logs that I can actually ship to the SIEM etc.) and hopefully simplifies both the due diligence we do on our suppliers as well as that done on us by our customers by allowing us to reduce a good chunk of the product feature questions to “does your product meet the MVSP standard”.

There was a pretty useful discussion of it on the Google Cloud Security podcast a while ago: https://cloud.withgoogle.com/cloudsecurity/podcast/ep114-min...


As head of security, did you ever try to get devs to remediate a ton of appsec vulns during early days, or ask them to build data flow diagrams?

My criticism of this is that you can secure the product with solid defense in depth on the enterprise/infra side in a way that is:

- minimally invasive to devs

- aligns with what is likely there to slot into - what SAST tool can you pay for that early on at a startup, let alone slot into a non-existent DevOps pipeline. $1-200k SIEMs existing at a series A/B that prob has 1-2 sec engs?

- get a WAF for a bandaid fix for the appsec problems initially until you can hire product security folks


I think these are broadly reasonable points and when there is no existing security team (I’m joining to set one up) I usually start with going after infra, picking the low hanging fruit and carefully planning out any points of friction with a user base that isn’t used to having a security team. That’s often a journey that involves basic cloud controls, restructuring IAM, bringing in MDM, pushing for license upgrades for key SaaS apps that force you to pay the SSO tax, making sure the WAF and DoS protections are actually doing something useful, documenting the security signals we have and those we need, rolling out and drilling an actionable incident response plan etc. I came from software engineering into appsec, prodsec and then into security management and possibly as a result of that I don’t push for plugging a dependency/static analysis tool into a work item tracking system and fire hosing engineers with CVEs in code paths they never hit. I do want to see that they can keep up with general patching though - can they, on a regular (think weekly/monthly) cadence upgrade their libraries and frameworks to the latest patch version and still ship with quality that week?

But when you look at some key aspects of that infra/biz app security journey at a largely cloud based young business - like taking more control of IAM across SaaS, bringing together activity from those multiple cloud systems and creating alerts that actually matter - that all involves knowing your vendor supports SSO, possibly also SCIM, has logs that contain relevant data that can be exported/streamed into a SIEM. Some of the stuff this standard demands is going to be fairly core to securing business infra. I just get bored talking to vendors that want to wave an ISO cert at me as if that somehow makes it ok they don’t give me access to audit logs.


I’m broadly aligned with what your focus is on what to do first.

Your approach seems to self-evidently present the issue with this proposed framework.

If:

- the goal of a startup security program is to meaningfully protect revenue (two parts to this IMO: prevent catastrophic breach, and have a sec program that can pass RFPs and SOC2 eventually)

- RFPs can start early and often

and:

- for a small sec program, ideally you shoot for a very tight overlap of the above (tangible risk controls that helps sales)

- the goal of the MVSP is to slot into the RFP process to provide meaningful controls

Then:

- we’ve both independently laid out that the bulk of the early wins are on enterprise and infra, and it’s not worth hammering devs beyond the obvious (patches, acceptably safe package management, no horrible API or authn/z risks)

- the majority of what you’ve covered and I’ve covered isn’t on the proposed MVSP

So putting that together, it makes me feel that the MVSP misses the mark vs its goals. The authors claim that it’s supposed to be more product-related, but much of it is not, and those parts largely don’t cover the obvious things you’ve laid out. Meanwhile, the product-related parts get right into the devs’ space, such that you’d need a very security-committed dev or a very over-tasked sec eng to do it all. It ends up being a bad blend: not pragmatic for the realities of startup sec, because it doesn’t understand how one can get defense-in-depth product wins without doing strictly “prodsec,” and why that’s a solid approach.


I have been responsible for these controls in multiple startup and smaller businesses selling into Fortune 500, including setting up security "programs", and agree with everything you say.

Ultimately it boils down to:

> Convince whoever is asking you are following whatever process they are asking about so they can check a box.

They call it security theatre for a reason. Of course if you don't want to be lying, well, you gotta implement a lot of this stuff as at least written policies. Also, auditors have a tendency to ask for evidence...


> A minimum security baseline for enterprise-ready products and services

Sure, although minimum viable and enterprise-ready seems like an oxymoron to me.

Step one: define MVP.

Step two: add minimum enterprise security, minimum enterprise scalability, minimum enterprise legal compliance, minimum enterprise cost controls, minimum social responsibility ...

Step three: why the hell is MVP 28 months late?


A list of security recommendations about MV(S)Ps created by a consortium of pretty big non-startup companies. The joke tells itself.

But seriously, I think the problem is in the name. This is not a set of best practices for any kind of startup, but rather a public checklist that anyone looking to provide some kind of service or product to any of the contributors (i.e. Google, Salesforce, Okta, or even other companies subscribing to this extreme SecOps cargo cult) must comply with.


Minimum Viable Secure Product, Minimum Viable Marketable Product, Minimum Viable Sellable Product.

I don't get it. Why do we need all these additions? Perhaps it is me who does not understand the meaning of "Viable". MVP is not a restricted, well-defined end-goal. "Viable" means viable for your case. You can fill in whatever you want. It does not mean "Viable" without security or "Viable" without "Marketability".


> It does not mean "Viable" without security

But zero security is exactly what most startups see as viable.

I remember one PM at a startup saying "We'll hire a marketing firm to create flowery language for our website asserting how ultra military grade secure we are. And that's it. We're not going to spend a penny implementing any of that security geekery you people talk about."


I was hoping for something more like:

* Check you're storing secrets safely and not in the code

* Run an automated security scanner to check for the OWASP top 10

* Confirm your endpoints have correct auth/permission checks, and that all debug flags are off

* Ensure databases are protected (eg. Not exposed to the internet or heavily restricted)

* Enable 2FA for everyone in the company and use a password manager

* Check what are the data protection laws and disclosure sla of your country
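The first item on a list like this can even be partially automated without a dedicated tool. A toy sketch of what secret scanners do (the two regexes are just illustrative; real scanners like gitleaks or trufflehog ship hundreds of patterns plus entropy checks):

```python
import re

# Illustrative patterns only, not a complete ruleset.
PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID format
    re.compile(r"(?i)(password|secret|token)\s*=\s*['\"][^'\"]+['\"]"),
]

def scan(source: str):
    """Return (line number, line) pairs that look like hardcoded secrets."""
    return [(n, line) for n, line in enumerate(source.splitlines(), 1)
            if any(p.search(line) for p in PATTERNS)]

hits = scan('db_password = "hunter2"\nprint("ok")\n')
assert hits == [(1, 'db_password = "hunter2"')]
```

Wired into CI as a failing check, even something this crude catches the most embarrassing commits before they land.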


I like the idea, but I'd be skeptical that it's practical for very small companies.

That doesn't let them off the hook, but all that process overhead can be a killer.

I worked for a company that was PCI DSS, and it often made it impossible to get any work done (to be fair, it had more to do with how they implemented it, than the standard, itself).

But I agree that security needs to be Job One for everyone, regardless of size.


A bunch of this stuff you have to do anyway if you want to be GDPR compliant. It's unfortunately not as easy to launch a new service these days as it used to be. But maybe that's not a bad thing given how much data we are being asked to share nowadays.


While this may appear to be an altruistic consortium of security-minded companies trying to help startups do the right thing, my skeptical take is that it's primarily a way to drum up business, which explains why the contributors list largely consists of vendors that help you check these items off your list.

Google (one of the main sponsors) and other cloud-hosting vendors can essentially say, "Sure, you can cobble all of this together yourself -- or you can buy our services with much of this already baked in."


If there's any vendor in the sponsor list with an ulterior motive, it's Vanta.


They all sell products (via cloud offerings etc) that pertain to this list, why call out Vanta in particular?


I don't think Google covers a lot of this stuff.


Interesting, but I'm having trouble thinking of a startup that was killed or even harmed by a security issue (outside cryptocurrency stuff). Anecdata-wise, startup graveyard stories don't seem to have being-hacked as one of their failure causes either, unless I'm missing some big ones.

An old product attempt of mine was a threat model wizard that generated a simple deck with some very clear viz of the model and threat exposure addressed to all concerned parties - and then backlog issues and themes and epics for implementing or verifying the controls it implied. It reduced the checklist/risk assessment process to about an hour from the weeks of spreadsheet work that was a lot of peoples jobs, and it put security into the product/project dev process. Pretty much aha.io (the product management tool) but for security.

What I learned from it is a) self-assessments weren't valuable to large org customers (who need to be able to say they were told by a 3rd party, so it's not like product management that way) b) the need of small orgs / startups was a standard compliance checklist to make standard assertions to potential customers as part of vendor onboarding, and a faster/better model didn't get them that standard. c) most security people trade on their insights, and codifying their ontology even just to speed it up undermined their leverage in their orgs.

This checklist or something like it might actually meet the real needs of some startups who must make templated assertions to customers for vendor onboarding, but most startups would be lucky and probably pretty chuffed if they actually had something someone wanted to steal.


It was nice to see this kind of topic; reading the list, it felt a little higher level than I anticipated, even for a startup targeting enterprise sales.

I would love to hear what others here consider important to make a minimally viable and secure product when starting out as a startup.

How much of this list is hard needs, soft needs, beneficial, and nice to have preferences/interpretations?

Many startups wouldn't make it to the start line fulfilling this list. If I could pick a solution architect's brain looking at this, I'd be curious to know what ways there are to satisfy these through architecture, design considerations, or using particular parts of particular platforms?


If this is the "minimum", I'd love to see what they left off the list.

My minimum for a public-facing MVP:

- All services use HTTPS to talk to users and each other.

- High standard of password hashing (or prefer to use something like Cognito or Firebase).

- Plan for GDPR compliance (not exactly security related, but in the wheelhouse. The GDPR grifters come out of the woodwork quickly, so you need all the popups and account deletion stuff from day 1 if you are releasing to the EU).

- QA specifically for security: users can't access each other files, authentication controls work, etc.

- Don't store or handle credit cards - use a vendor like Stripe.

- Ensure all dev tools enforce 2FA where possible (GitHub, AWS, etc.).

- A basic backup system.

Then post-MVP, start working on the following:

- centralised logging.

- dependency patching plan.

- etc
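On the password point in the list above: a hedged sketch of what a "high standard" can look like in practice, using the memory-hard scrypt KDF from Python's standard library. The cost parameters and function names here are illustrative choices, not a vetted recipe - tune them for your hardware:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) using the memory-hard scrypt KDF."""
    salt = os.urandom(16)  # unique random salt per password
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2 ** 14, r=8, p=1)  # illustrative cost parameters
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2 ** 14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

Storing only (salt, digest) per user, never the password itself, is the property that matters; whether you get it from scrypt, Argon2, or a managed service like Cognito is secondary.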


> Don't store or handle credit cards - use a vendor like Stripe.

Alternatively, if your customers will make one-off or infrequent payments, you might want to consider accepting cheques, which can be made electronically through a system like BACS (note that every country has different systems available). Cheques (checks) are a 'customer pushes' method as opposed to cards, which are a 'seller pulls' method. Accepting international payments makes transfers somewhat more difficult, and cheques always assume a higher level of competence on the part of the customer than cards do; however, neither are likely to be a problem for B2B products.

> Ensure all dev tools enforce 2FA where possible (GitHub, AWS, etc.).

Two things that are critical to remember are that 1: most forms of 2FA don't improve security, and 2: with 2FA enabled, many services will refuse to restore accounts on the basis of trust or other evidence, where they otherwise would.

Challenge-based forms of 2FA such as TOTP or FIDO are better than SMS, as the latter can be intercepted in transit. Calculating the response to a challenge on a separate physical device from the one being authenticated is an additional benefit that FIDO2 usually has. Side note: if you need to authenticate customers, allow them to choose at least TOTP; try not to even provide SMS 2FA.

As for the lack of account recovery, this can be a benefit, but only if you have robust procedures to make sure employees' credentials (like TOTP secrets) are copied to locations accessible by other employees; this effectively means things like buying fireproof safes if you are doing it properly. Revocation of credentials by other employees is just as important as recovery. All this is to say that 2FA is not something you can just toggle a switch on to make your company secure; it is worthy of a company-wide strategy.
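The TOTP scheme mentioned above (RFC 6238, built on RFC 4226's HOTP) is small enough to sketch with only the Python standard library. This is an illustrative implementation to show how little magic is involved, not a vetted library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, for_time=None, step=30, digits=6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Against the RFC 6238 Appendix B test vector (ASCII secret "12345678901234567890", T=59), this yields "94287082" with 8 digits. In production you would also handle clock skew windows and replay protection, which this sketch omits.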


Totally arbitrary. Some of the advice is actually bad. The only critically relevant pieces of information are the password guidelines and HTTPS-related bullet points, for which there are actual authorities to read, not this waste of time and effort.


Isn't this just a ploy to get decision makers to think they need to use these services/platforms to achieve MVSP (and to think MVSP is a thing)? Man the internet really gentrified in a gross, banal way.


> Cross-site request forgery. Example: Accepting requests with an Origin header from a different domain

English is not my first language and I’m not a security expert, but this description seems a bit misleading.

The “Origin header” part should be left out. You don’t check where it’s coming from (you can’t know). You either send a unique token back and forth, or you defer the issue to the browser (cookies with SameSite strict/lax).

And it’s not “requests”, it’s “unsafe requests” AKA mutations (POST, PUT, PATCH, DELETE). It’s not just a nitpick. If your GETs are not safe, you might cause deeper issues.
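For illustration, the token-based defence described above (a synchronizer token bound to the session) can be sketched in a few lines. `SECRET_KEY` handling and the session binding are assumptions here, not a complete framework integration - in practice most web frameworks ship this built in:

```python
import hashlib
import hmac
import secrets

# In practice, load from configuration; generating per process would
# invalidate tokens on every restart.
SECRET_KEY = secrets.token_bytes(32)

def issue_csrf_token(session_id: str) -> str:
    """Derive a token bound to the user's session; embed it in rendered forms."""
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def check_csrf(session_id: str, submitted: str) -> bool:
    """Verify on every unsafe request (POST/PUT/PATCH/DELETE); GETs stay side-effect free."""
    return hmac.compare_digest(issue_csrf_token(session_id), submitted)
```

Because the token is derived from the session, a token stolen from one user's page cannot be replayed against another user's session.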


Not really a fan of hijacking the MVP concept for something unrelated.


The intent of the "minimum" in MVP is to produce something quickly in order to learn what resonates with people who might actually pay you money for something. It would be useful to have a "detect my disastrous security vulnerabilities" scanner with various language plugins for Go, Dart, Swift, whatevs that I can run on the source of my MVP so that I don't have to waste time reading a checklist. Does that exist? Dunno.


Are you familiar with https://snyk.io/?

Disclaimer: I used to work with the founder, he’s great.


Not sure I can get behind the idea. This is like an oxymoron. I get the heightened need for security, but this is not the way. Security is a journey, not a checklist.


(2021)?

Some previous discussion: https://news.ycombinator.com/item?id=29100400


Better to ask forgiveness than permission. Extra security not worth reduction to sprint velocity ime.


What a brain deadness... "Maybe secure but must include HTTP.*"...


So Google wants it to be prohibitively costly to compete with Google. Got it lol


> Comply with all industry security standards relevant to your business such as PCI DSS, HITRUST, ISO27001, and SSAE 18

> Comply with local laws and regulations in jurisdictions applicable to your company and your customers, such as GDPR, Binding Corporate Rules, and Standard Contractual Clauses

> Ensure data localization requirements are implemented in line with local regulations and contractual obligations

Whoever wrote this must be so irrationally out of touch with the startup space (or thinking of an older billion-dollar unicorn startup) to think that an MVP needs to do any of the above. I wouldn't care about GDPR until I have a somewhat strong EU userbase. Try to respect the spirit, sure, but it's not like it will be enforced on a small business within its first few years of existence.

Localization for an MVP is even more out of touch. Make an English, US-centric version first (I'm not even American), put it out there, and work on localization once you've had some success.


There's more to i18n/l10n than the language. Would you be happy seeing all dates in US-centric formats like 05-06-2023 for May 6th, not June 5th, and numbers like 1,245,678.90?
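The ambiguity is easy to demonstrate, and ISO 8601 (`YYYY-MM-DD`) sidesteps it entirely, which is one near-zero-cost i18n choice even for an MVP:

```python
from datetime import date

d = date(2023, 6, 5)
print(d.strftime("%m-%d-%Y"))  # 06-05-2023: June 5 to a US reader, May 6 to most others
print(d.isoformat())           # 2023-06-05: unambiguous ISO 8601
```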


I think you misinterpreted what I meant. Yes, I would prefer that my dates are in my preferred format, but if you are building an MVP, a Minimum Viable Product, spending precious time on getting localization implemented instead of your core features that deliver value to your customers is misguided. It would be like focusing on accessibility when you don't even have a single customer, let alone a customer with a handicap.

Focus on value; everything else is fluff. When you have a few paying customers and your product is mostly proven, then it's time to make it better.


> It would be like focusing on accessibility when you don't even have a single customer let alone a customer with a handicap.

So you're saying people with accessibility needs are an afterthought, at best? Can you say a bit more about how you justify designing and shipping a product that only appeals to Americans who read English and happen to be perfect physical specimens?


> So you're saying people with accessibility needs are an afterthought, at best?

This is such a comical strawman of what I wrote but I'll try to clarify nonetheless.

Say you have a startup that wants to disrupt inventory management, your secret sauce "idea" is a feature that allows you to cut stale inventory by 14% by implementing various just-in-time workflows that optimize shipping and reception of new goods.

If you are in the US (or most English-speaking countries, but considering how combative you are, let's say the US), your MVP should be the most barebones system you can ship that will give your customers that 14% reduction and make them want to sign a contract with you. Having a screen-reader-compatible interface might be a must ~1 year down the line, when you reach a scale where conforming to the ADA is a necessity, or realistically when your customers ask for it because of the ADA.

It's not about caring or not, it's about assigning your meager startup resources towards the features that the prospective customer will say "yes we want your product". That's what an MVP is.

In addition (and perhaps even more importantly), MVPs are useful in determining your product-market fit. You might go through 5 or even 10 MVPs as you pivot and refine your inventory management idea, all without having a single customer keeping the lights on. You could implement accessibility in all of them, or you could implement it only in the version that actually gets adopted by a customer.


> the prospective customer

Yes, all this is just a roundabout way of viewing the prospective customer as a narrow slice of people, the ones who happen to conform to a specific set of ideas about what 'normal' means. If the prospective customer happens to be remarkably similar to the startup founder, perhaps it says more about the attitudes of what constitutes a viable market than it does about really developing a useful product. It reflects a certain homogeneity of ideas about 'prospective customer', and reinforces a homogeneity of options.


Not really though. If you are building a product for a market where a portion of your targeted user base will need that feature, then suddenly your MVP needs to include those accessibility features from day 1, because it's part of the minimum set of features that make your product viable.

For example, say your startup is targeting long-term care homes as a market for a new app that helps residents track activities within the facilities. Your app will need to be accessible, as a significant portion of your user base might have trouble with technology or difficulty reading small text on a smartphone. In that situation you need to have accessibility at the top of your checklist because it's part of the minimum to make your product viable. In many ways it might even be how you displace whatever the organizers were using previously, as you beat your competition by providing a better fit for their userbase.


Indeed, some of these things take months of work to complete; to expect a startup with a couple of people, working part time, to dedicate time to these is a startup death sentence.

And really, most of them don't provide security, they're a checklist. Checklists don't provide security, they provide (sort of) accountability.


Frankly, security should not be the top priority of a small startup, unless you deal with extremely sensitive data. I'm not sure it should make the top five. Off the top of my head, survival, product dev, growth, hiring and infra are all more important if you're just starting out


There are certain things that are very difficult to implement if you skip them at launch. For example, encryption of 3rd-party secrets. CircleCI is a good example of a successful company burning themselves badly by treating encryption as an afterthought.


Sure, but what if you get hacked, or defaced, or your client info gets out?

It could kill you too. It's a balance.


> I wouldn't care about GDPR until I have a somewhat strong EU userbase. Try to respect the spirit sure, but it's not like it will be enforced on a small business within its first few years of existence.

This seems ethically wrong -

1. Even if your user base is not “strong” the privacy of the users you do have still matters

and

2. It should be about protecting them not just calculating enforcement risk like a Ford Pinto engineer.


I will NOT comply with authoritarian demands imposed on me by the EU. My users, MY data. I clearly state how and why I share user data with brokers and users choose of their own volition to use my services. This is NOT unethical.


Even when accepting users' choice to submit personal data as a justification for retaining the data (which is illegal in more than just the EU, by the way), you may still receive personal data about someone which was not submitted by them voluntarily. This can happen in situations ranging from the normal course of business to customers actively attempting to use your product illegally.

You have a duty to deal with a situation like that, and since the GDPR already makes provision for third-party processing of personal data, there is little reason not to go one small step further and extend your process for personal data deletion to users directly.


Maybe it is ethically wrong to launch in the EU without complying with GDPR. But if that is the case, then launch somewhere else first and build towards a more secure product in the meantime while you raise money and gather more insights. Don't forget it is an MVP ffs, and that you can be completely open about it (i.e. just say it is in beta or early access phase).


The checklist doesn't render correctly on mobile.


FedRAMP High is a good minimum viable security



