#5 has a converse: often, the only way to get a rebuild to succeed is to drop features, and it's a major red flag if management insists on 100% feature parity.
The way to distinguish this from the #5 situation in the article is to ask if you're dropping features because they're hard or because nobody uses them. The former is a red flag; the latter is a green flag. Before you embark on a rebuild, you should have solid data (ideally backed up by logs) about which features your users are using, which ones they care about, which ones are "nice to haves", which ones were very necessary to get to the stage you're at now but have lost their importance in the current business environment, and which ones were outright mistakes. And you should be able to identify at least half a dozen features in the last 3 categories that you can commit to cutting. Otherwise it's likely that the rewrite will contain all the complexity of the original system, but without the institutional knowledge built up on how to manage that complexity.
> Before you embark on a rebuild, you should have solid data (ideally backed up by logs) about which features your users are using, which ones they care about, which ones are "nice to haves", which ones were very necessary to get to the stage you're at now but have lost their importance in the current business environment, and which ones were outright mistakes.
This is so important. I've been on many a project where, three months in, we wished we had historical tracking data on user activity to back up our instinct to cut a particular feature that seemed worthless. The worst part? Even if you add it immediately, you'll have to wait 2-4 weeks to get a sufficient amount of data.
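To make this concrete: the instrumentation doesn't have to be fancy to be useful later. A minimal sketch of what I mean, assuming a SQLite-backed app (the table and feature names here are invented for illustration):

```python
# Minimal feature-usage log: one append-only table, one helper.
import sqlite3
import time

conn = sqlite3.connect("usage.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS feature_events (
        feature TEXT NOT NULL,   -- e.g. "export_csv", "view_history"
        user_id TEXT NOT NULL,   -- ideally pseudonymous or hashed
        ts      REAL NOT NULL    -- unix timestamp
    )
""")

def track(feature: str, user_id: str) -> None:
    """Call this at every feature entry point; it's cheap and append-only."""
    conn.execute("INSERT INTO feature_events VALUES (?, ?, ?)",
                 (feature, user_id, time.time()))
    conn.commit()

# Months later, when the cut list is being drawn up:
# distinct users per feature over the last 90 days.
cutoff = time.time() - 90 * 86400
for feature, users in conn.execute("""
        SELECT feature, COUNT(DISTINCT user_id) AS users
        FROM feature_events
        WHERE ts > ?
        GROUP BY feature
        ORDER BY users ASC
        """, (cutoff,)):
    print(f"{feature}: {users} users")
```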
It's also important to realise that a feature that is rarely used (view history, remove user) might be more important than one used more often (a dashboard widget that nobody pays attention to).
Yup; usage statistics are only part of the picture of a feature's value. Compliance is another one, for example: sure, few people will use the 'download all my data' and 'delete my account' options, but they're mandatory for GDPR compliance, and not offering them may cause a huge fine. There are a lot of these compliance features.
> The worst part? Even if you add it immediately, you'll have to wait 2-4 weeks to get a sufficient amount of data.
I think this was the problem a product like Heap [1] was designed to solve: just track all user actions, forever, and then assign pipelines after the fact based on what you want to check up on.
Don't work at Heap or anything, just love the team and product.
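Even without a product like Heap you can get a crude version of the same idea: capture every action into a generic, schema-less log, and write the questions months later. A toy sketch (this is emphatically not Heap's real data model, just the shape of the approach):

```python
# "Track everything, ask questions later" -- toy version.
import json
import sqlite3
import time

conn = sqlite3.connect("events.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS raw_events (
        user_id TEXT, name TEXT, props TEXT, ts REAL
    )
""")

def capture(user_id: str, name: str, **props) -> None:
    """Fire on every click, page view and API call; no upfront schema."""
    conn.execute("INSERT INTO raw_events VALUES (?, ?, ?, ?)",
                 (user_id, name, json.dumps(props), time.time()))
    conn.commit()

# A funnel defined long after the data was collected: of everyone who
# opened the report builder, how many actually exported a report?
opened = {u for (u,) in conn.execute(
    "SELECT DISTINCT user_id FROM raw_events WHERE name = 'open_report_builder'")}
exported = {u for (u,) in conn.execute(
    "SELECT DISTINCT user_id FROM raw_events WHERE name = 'export_report'")}
print(f"funnel: {len(opened)} opened, {len(opened & exported)} exported")
```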
We need case law to settle the matter, but in general the GDPR indicates that if you don't need to collect the data in order to perform the requested activity, you need explicit consent for collecting it, and you will be held to a high standard in court if this ever comes into question.
Yes, but as with the "cookie law" before it, it's absolutely fine to go ahead without asking if the processing is genuinely required (in the case of something like logging aggregate usage counts of APIs, that's easy to justify as a requirement for maintaining a reliable service; it's basic server monitoring).
Things like online stores using cookies to track a user's shopping cart across requests are completely fine, yet it seems like legal departments decided to be overly cautious and treat all cookies as potentially infringing. GDPR may be triggering similar reactions.
I wouldn't have a problem with that if marketing departments became equally cautious, but they seem to just slap on a banner and carry on as before :(
> if you don't need to collect the data in order to perform the requested activity
It's about data that can identify a user, not any data at all. A collection of actions with anonymized user IDs won't allow you to identify the user (in most cases), so it's fine to keep it.
Re: GDPR, I'm hoping that I don't have to bother my users with a "do you consent to" popup when the only thing I want to do is log API calls server-side so that I can see patterns in usage and such. If I were to show such a popup, users might mistakenly think I'm one of those techcrunchers with hundreds of data partners that all get to see your PII. I do not want to affiliate myself with that type of actor.
"The principles of data protection should therefore not apply to anonymous information, namely information which does not relate to an identified or identifiable natural person or to personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable. This Regulation does not therefore concern the processing of such anonymous information, including for statistical or research purposes."
As long as it's not linked to a particular profile ("pseudonymous" doesn't count, since it could still be linked back), it's fine.
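For the "I just want server-side usage patterns" case above, the safest pattern I know of is to store no identifier at all and keep only aggregate counts, which lands you squarely in the anonymous-information territory of the recital quoted above. A minimal sketch, assuming you control the logging path end to end (and that your web server isn't separately keeping IPs):

```python
# Aggregate-only API stats: stores (day, endpoint) -> hit count, nothing else.
import sqlite3
from datetime import date

conn = sqlite3.connect("api_stats.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS endpoint_counts (
        day      TEXT,
        endpoint TEXT,
        hits     INTEGER,
        PRIMARY KEY (day, endpoint)
    )
""")

def record_hit(endpoint: str) -> None:
    """Bump today's counter for this endpoint; no user data involved."""
    conn.execute("""
        INSERT INTO endpoint_counts VALUES (?, ?, 1)
        ON CONFLICT(day, endpoint) DO UPDATE SET hits = hits + 1
    """, (date.today().isoformat(), endpoint))
    conn.commit()

record_hit("/api/v1/reports")
```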
One trap to be careful of when you choose to remove features is assuming there is some kind of meaningful average user.
A good example is MS Office: there are a huge number of features that only 5% of users might ever use, but the majority of users are likely to use quite a few of these niche features individually, and if you remove all the low-use features, you piss off basically everyone.
I think the mistaken idea of an average user is why a lot of metrics-driven software seems to get more and more useless with every update.
(I can't see the present/away status of contacts in the newest Skype. Really, guys?)
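A back-of-the-envelope calculation shows how fast this bites. Assume, purely hypothetically, 30 niche features, each used by an independent 5% of users:

```python
# What fraction of users lose at least one feature if all 30 are cut?
n_features = 30
p_use = 0.05
p_unaffected = (1 - p_use) ** n_features  # probability a user uses none of them
print(f"{1 - p_unaffected:.0%} of users lose something")  # ~79%
```

Independence is a simplifying assumption, but the long tail is the point: cutting nothing but "5% features" still takes something away from roughly four out of five users.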
> And you should be able to identify at least half a dozen features in the last 3 categories that you can commit to cutting.
Ideally, you disable them in the old software, and observe how many people complain.
Too often, product management commits to cutting a feature, and then caves in when paying customers complain. It's best to know in advance which category a feature really falls in.
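One cheap way to run that experiment is a kill switch that hides the feature for a slice of users and logs every blocked attempt, so you measure real demand instead of waiting for support tickets. A hypothetical sketch (the flag store and names are made up):

```python
# Reverse dark-launch: turn a cut candidate off for 10% of users and count
# how often they run into the wall.
import hashlib
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("feature_probe")

DISABLED_FEATURES = {"legacy_report_export"}  # candidates for cutting

def in_experiment(user_id: str) -> bool:
    # Deterministic 10% bucket so a given user always sees the same behaviour.
    return int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 10 == 0

def feature_enabled(feature: str, user_id: str) -> bool:
    if feature in DISABLED_FEATURES and in_experiment(user_id):
        # The blocked attempt *is* the data point.
        log.info("blocked_use feature=%s user=%s", feature, user_id)
        return False
    return True
```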
I think it's important to separate feature improvements from a technical rewrite. Ideally, in the rewrite you mostly just make things work the way they did; sometimes you might fold a feature improvement into it, but if you come out of the rewrite with a more stable product that supports about the same usage stories, you should consider it a success.
Sometimes you'll want to fold feature changes into a rewrite (e.g. no longer prompting the user to confirm X twice). Sometimes this will ease development and be worth it; other times it'll pay off to just retain the old functionality and add the change to a list to be user-tested later.
Once the tech side is solidly done, take a swing at updating the poor UI. Do it in an agile way so you can back out of changes the user base rejects, since (at least within my more modest usage studies) not everything people depend on comes up or gets reported. I'd much rather roll back a design feature branch than have users get change fatigue when you're forced to roll back your shiny new rebuild and the whole project ends up shelved.
You hamstring the product to make a feature work one way, then find out that what they really wanted would have been easier to implement, but they never asked because they thought it would be harder.
Almost all feature requests ask you to implement a particular solution rather than to solve a particular problem.
The way I try to solve this is to ask "why?" as many times as it takes to get to a fundamental business problem. Then it becomes easier to write a user story (as opposed to a specific feature request) and to come up with other solutions that can be measured against that story. It also helps keep the product focused: it's easier to tell when a story is outside your target market than it is with a bare feature request, and then you can make a conscious decision to either stay away or deliberately expand into that market.
When this happens a couple of times you start sounding like Honey from The Incredibles.
It’s difficult not to sound combative when they say they want a convertible but you have to wheedle out of them that they want to take a proverbial road trip through monsoon season. No, you get a Land Rover with a snorkel or you wait, pal.
So bossy and difficult. Why won’t you just give us what we asked for? These meetings would go so much faster.
I once worked on a feature that apparently lots of clients were asking for. It took 4 weeks to implement. Went to production. Never heard anything about it. Two years later, we were asked if we could modify the feature to work for another use case. I looked at the database. The feature had never... ever... been used. Rows returned = 0.
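That zero-rows check is worth scripting before any rebuild: for every feature you're tempted to keep for parity's sake, ask the database first. Something like this, assuming a usage log along the lines sketched upthread (feature names invented):

```python
# One-off sanity check: has anyone ever touched these features?
import sqlite3

conn = sqlite3.connect("usage.db")
for feature in ("bulk_import", "saved_searches", "pdf_annotations"):
    (rows,) = conn.execute(
        "SELECT COUNT(*) FROM feature_events WHERE feature = ?",
        (feature,)).fetchone()
    flag = "  <-- never used" if rows == 0 else ""
    print(f"{feature}: {rows} uses{flag}")
```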
That's why it's important not to take at face value what customers and product managers say about the features they want. I've had a ton of occasions where it turned out that what they really wanted was totally different from what the devs were told.
Yeah, I always ask for our user stories to have a 'background' section explaining the problem and the reason for the feature request, so it helps us understand the importance and purpose of the feature.
There have been a couple of times where I've tried to use a feature that should have been awesome but was terrible, and then it got pulled in a newer version of the product. It was incredibly frustrating to wait for a fix that never came. Data on what's used is good, but you need feedback about what sucks to go along with it.
Feature parity is the reason why some of the projects I've worked on ran into #2: you can't get customers to switch if there's no parity yet. The MVP for some of those projects took a year to reach. Mind you, it'd probably have been 6 months if they hadn't opted for a microservices architecture.
A bigger red flag, in my experience, is an unwillingness to even consider dropping any features. Often combined with a desire to add new features during the rebuild. Always goes wrong.