“This project will only take 2 hours” (utk.edu)
398 points by greenSunglass on Nov 10, 2021 | 271 comments



I've gone down this rabbit hole before, and yes, there are things that take two hours, and there are things that take a lot more. The problem is the scope - i.e., are you trying to do what you said you'd do (miniature software to log urls), or are you adding 100s of requirements to the scope (gui, filters, privacy, ...)?

We had a bunch of .xml files containing some data, and someone wanted a .csv file... sure, an hour max, and it's done. Fire up some perl, loop, xml->hash, take the data needed, print line by line, done... less than an hour.
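
Something in the shape of this sketch, shown here in Python rather than Perl; the directory and tag names are invented for illustration:

    import csv
    import sys
    import xml.etree.ElementTree as ET
    from pathlib import Path

    # One CSV row per .xml file: parse, pull the fields we need, print.
    writer = csv.writer(sys.stdout)
    writer.writerow(["name", "month", "hours"])  # hypothetical fields
    for path in sorted(Path("data").glob("*.xml")):
        root = ET.parse(path).getroot()
        writer.writerow([
            root.findtext("name", ""),
            root.findtext("month", ""),
            root.findtext("hours", ""),
        ])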

"Hey, can you add just this and that, and separate folders for this and a total and rolling sum?"

...yeah sure... hour or two.

"and this and that?"

...sure, another hour...

"Can this be made for that department to use.. just add a simple gui"

...and we've gone from an hour or two to weeks or months of work. Nobody wanted a gui application made for "normal users", with all the checks and verifications and documentation and everything, when they made the request... the scope was a simple one-off script, and that actually took an hour or two. This is like requesting a bicycle, then wanting to add a hitch for a boat trailer and making it highway-legal. And the added problem is that your script was never intended to do "all of this", so you either have to rewrite it from scratch, or your software ends up looking like "the Burrow" (from Harry Potter).


I vaguely remember a rant some famous Linux developer went on about how setting up printing in Linux requires a dozen parameters, half of which are arcane nonsense that the printer manufacturer themselves probably doesn't know how to set up correctly.

Meanwhile, on an Apple or Windows computer it can be literally just plug and print.

The difference is that hiding those arcane input parameters takes work, and a lot of it. There was a quote that it takes 10x as much effort to remove an input parameter as to simply leave it in there and let the user fill it out.

A lot of people don't get this, and think the simpler-looking software is simple, when under the hood it is probably doing model auto-detection, driver downloads, protocol negotiation, and even firmware updates on the fly!

I face this all the time at work. "Can't you just make a single button to run this task so I don't have to figure out how to use this complicated script?"

"I can, but not with this budget..."


Ha! Printers!!

I worked on an OS called CTOS back in the day. It was difficult to set up printing, because there was no feedback. It worked or it didn't, and there were queues to configure and services to install and network connections...

Somebody in a trade rag wrote "It's easy to print on an Apple, difficult on an IBM PC, and impossible on CTOS". That smarted.

So a guy named Tom Ball (later the Java Swing guy) wrote a Prolog script that somehow could just tell you if printing was working or not, and what to do about it. Never figured out how he did that. But our trouble tickets went from something like 60% printer-related, to noise. Just like that.

I remember it after all these years, and I try to make software foolproof. For the customer service folk, because they matter.


Early on as a dev, I made the mistake of doing exactly what I was asked. Basically I learned that user control was better than automation in some cases. I had to write a script to select batches of files from a larger number for translation/annotation with certain proportions of various characteristics. There was some amount of self balancing involved, and I wrote a cli program that did this well. I took a process that previously took days (often communication lag) dozens of times per year and brought it down to milliseconds.

The problem was that the humans involved didn't really want what they asked for. They wanted control. They wanted to select a batch with certain characteristics, then another batch with others. They wanted to pass a list of files that must be included, a list that must not. They wanted audit functionality, they wanted multi-step selection, to mark files for auditing, translation, and different annotations types, to select a batch from one annotations type for another.

At the end of the day, this ever evolving script worked out very well and took away the need for a programmer doing complex sql and filesystem queries. There was still a human involved, but that was because the human wanted to be involved. The process taught me about users wanting something other than what they ask for, scope creep, and also showed me the value that could be generated via a fairly simple script created in collaboration with the user.


Printers are plug and play on Windows? That's not my experience. I have constant problems at work. Settings resetting, stupid printer driver GUIs nagging about ink, prints coming out wrong and making me go into the detailed settings to fix it. Rare are the days where I can just print the stuff I need with no fuss. I'm fortunate enough to be able to solve problems on my own. Coworkers are forced to call support and waste hours and hours of their time because of these printers. The scanning situation is even worse.


I've been working with printers for 35 years and they still find ways to destroy my soul.

This week I discovered my Canon printer has some way to detect "plain [white] paper" and absolutely refuses to print on the high quality "cream" coloured paper instead. No matter what tricks I tried in swapping the paper, as soon as I put the better paper in the tray it would freak out.


Some printers use an infrared sensor to detect paper width. Those tend to freak out on weird paper colors, though I've never had the issue with cream before. Only darkish colors like purple and the like.


I do a little tech support for my neighbors after hours. Printer issues (on Windows) are an extremely common complaint. All kinds of random things can and do fail.


Printers are plug-and-play on Windows, macOS, and especially iOS (AirPrint) if 1. you're always printing PDFs at 100% scale, single-sided, black-and-white, and 2. you're printing against a printer where that particular configuration is its wheelhouse (i.e. a laser printer.)


I believe you’re referring to ESR’s essay http://www.catb.org/~esr/writings/cups-horror.html


That's the one!


Yeah it takes a LOT of work to hide it [1]

Also: For some stuff I want to be able to use the "arcane" parameters for some really obscure task where it is just the right flag or something.

[1] https://www.youtube.com/watch?v=HllUoT_WE7Y


> I face this all the time at work. "Can't you just make a single button to run this task so I don't have to figure out how to use this complicated script?"

"Just" is a four-letter word and its use is forbidden. Replace it with "in addition" and change how everyone thinks about the request.


> The difference is that hiding those arcane input parameters takes work,

I'm reminded of the first time I tried to get wifi going from a CLI-only NetBSD install. wpa_supplicant something something.. I eventually got it working but it was sort of a nightmare compared to how you connect wifi in an OS UI.


I have not experienced this fool-proof plug and play on Windows or macOS. Just last year I replaced a printer because it wasn't working with my wife's MacBook Pro; it worked one day and just dropped off the face of the earth the next (I was still able to print over USB from my linux laptop). The printers where I work constantly have issues working with the Windows PCs.

Moral of the story, printers are a necessary evil and what is wrong with the world.

EDIT: For future readers if you don't see the responses. The home printer was a Brother.

Also, this is not a statement that Linux is better. Think of it as: the experience of using printers in general is pretty shitty regardless of what your computer is running.


Printers used to run much better on Windows back in the day, when the manufacturers developed drivers with the intention of making them run well. I've never seen anyone doing the just "plug, install driver, run", except for shared printers maintained by a contractor. But they used to run better than on Linux.

Nowadays the main advantage of Linux is that it won't install the manufacturer's driver. Anyway, the arcane settings aren't required anymore.


Was it an HP printer?

HP last year managed to accidentally revoke their macOS code signing certificate. I'm not sure how much software was affected, but as far as I know all HP software for Mac, including printer drivers, was broken (CRLs are updated automatically even if you have automatic software update disabled)


Nah, it was a Brother printer; I replaced it with a cheap HP printer. The wireless on that one stopped working, and I've just given up; my wife uses the cable to print. Well, luckily the HP is still playing nice over a USB cable.


If you still have the Brother printer, try disabling IPv6 on the Brother printer.

I remember this issue from a while back. Some update happened, and then wireless printing stopped working on some iOS devices. It happened to a colleague's macbook, and disabling IPv6 on the Brother printer fixed it for us. I can't remember where I found the initial advice, but here's someone else mentioning it: https://discussions.apple.com/thread/250584298


Any sufficiently large discussion of software complexity will spawn a large sub-thread about printers.

Any sufficiently large discussion of printers will beget a number of ad-hoc support sub-threads.


Printers were the bane of my existence when I was responsible for maintaining a small business's computer stuff. They all ran shitty Windows PCs with varying versions. One day one of the accountant's computers started crashing on boot, and after a hell of a long time, I found that some Epson printer driver had set a registry key that didn't revert upon uninstallation, and that crippled the machine.

I wonder if in your case, your old printer relied on either 32-bit kernel extensions (I don't know if that's a thing), or if it relied on one of the deprecated Kernel Programming Interfaces such as IOUSBFamily. I also wonder if a printer driver falls into a category of deprecated extensions that are silently reported by the OS, because most do pop up an alert of some kind.

https://developer.apple.com/support/kernel-extensions/


The single most problem-reducing maneuver I've tried with home network printers is getting the printer off plain DHCP and making its address consistent (set a static IP manually [outside your DHCP range] or use a DHCP reservation).

After sorting this out in the router, then add the printer to the various desktops/laptops/tablets/phones.

Maybe I'm imagining the improvement, but I don't think so (and it can't hurt).


> The difference is that hiding those arcane input parameters takes work, and a lot of it.

One of the things I say often, too often, is, "Making it look easy is very very hard."


When you make one part simpler, you move that complexity to another part.


“All user input is error.” - Elon Musk


[flagged]


For those who also weren't sure, I learned that this is an actual quote from Mr. Musk to a U.S. Senator and Senate Finance Chairman.

https://twitter.com/elonmusk/status/1457497438474981384

I infer the point to be that Mr. Musk trolls so frequently and thoughtlessly that it's impossible to tell whether he believes the things he's saying, which is why quotes from him are mostly worthless as guiding principles.


See, I read the article's problem description and think, "Okay, a proof of concept is going to take 2-4 hours. A shippable product is probably going to take 5 to 10 times that long."

IMX, it's not even necessarily feature creep. It's failure to stop and adequately understand what constitutes a minimum viable product. Not viable to sell on the marketplace. Merely viable to deliver to another user.

There's a huge mistake in software development to think that the hard or complicated part is making the algorithm, or understanding how your program will interact with its own data or external data or hardware. That's seldom the actual hard part. The hard part is making it useful to a human being using the system.

If it's just for me, I can use a proof of concept that I wrote in perpetuity. I can deal with how janky it is or the obtuse error messages it spits out. If anybody else has to use it, though, there needs to be a mental model for interacting with the software that is intuitive and comprehensible to the user, one that doesn't require them to have a comprehensive understanding of the underlying code, classes, and interfaces.

This is the actual hard part of software development, and the bad part is that it's typically the part that nobody likes to do.

That's the lesson the professor is trying to teach here. It's not how to write a program. Nearly any fool can do that. The lesson is: how to author deliverable software.


With scripting, often you don't have to deliver it to another user; you just have to get the data from them, run your script on it, and give them back the result. It only has to work on your machine, because that's the only place it'll ever be run, and only once.


You seem to be complaining that users/stakeholders have actual needs that may not be captured by their initial naive statement of the problem/solution/requirements.

Indeed, they almost always do. Most users/stakeholders aren't good at technical specification of a solution that will meet their problem. It's part of our job to elucidate that in interaction with them.

And then, even when you are good at technical specification, sometimes it's hard even for domain-expert developers to fully understand the problem up front, without iterating. This is one of the useful observations attributed to "agile" (but of course not unique to it).

These are some of the reasons that projects often take longer than initially estimated, sure. You can complain about them, or complain about users being bad at specifying their problem/solution, but it's just a fact of reality, complaining about these facts won't make them go away, and part of doing a professional job is dealing with them.

Which means being very careful about assuming a feature/task is "easy", and remembering to do more investigation of the problem/domain/requirements than just taking a naive "easy" mission from stakeholders at face value -- which is what OP is about.


To copy from my other comment:

My exact case was some reporting for some project (15 people, 36 months, 1 xml per person per month), to throw into Excel and draw one graph, for one PPT, a one time deal, for one project manager to show the numbers on the report.

The end project, if I didn't stop the idea, would be a black box, where you would put any number of random xmls, and get exactly what the accountant wanted with one button... so an impossible task.


Yes, scope creep can significantly change the time requirements or even the entire, shall we say "scope". But I think it's just as important for us as developers to be aware of that on the initial request, in order to suss out the actual requirements.

What does the process look like when we ask back, "are you going to need to have a total and rolling sum? should we think about making a gui so other people can use it too?" Engaging the requests early on not only helps our planning, it helps the "client" figure that out sooner than later.

I think I've been successful in my career because I've been able to listen to a client's request and then help them figure out what they are actually asking for instead of taking it at face value. That can be easier said than done on an internal team, but it changes the quality of the product and dynamic of the team significantly.


Indeed. Much of the time, "scope creep" is actually just discovering what the requirements actually are. It's not that the requirements change; the requirement likely existed all along - it's just that nobody realised it. Iterating through feedback loops where you build something that meets the original ask, only to refine/change/add/remove things as everybody discovers that what they wanted is not in fact what they wanted, is not an unreasonable approach.

> I think I've been successful in my career because I've been able to listen to a client's request and then help them figure out what they are actually asking for instead of taking it at face value. That can be easier said than done on an internal team, but it changes the quality of the product and dynamic of the team significantly.

Sometimes you can explore the actual scope by skillfully working with the client before you build anything. Sometimes, however, you can only learn what the actual requirement is by building the wrong thing first.


My exact case was some reporting for some project (15 people, 36 months, 1 xml per person per month), to throw into Excel and draw one graph, for one PPT, a one time deal, for one project manager to show the numbers on the report.

The end project, if I didn't stop the idea, would be a black box, where you would put any number of random xmls, and get exactly what the accountant wanted with one button... so an impossible task.


> "...just add..."

"It's just..." are the two words you never want to hear :)


"Simply", "just", "add", "now"... each word increases the difficulty level by one increment.


This is hilarious to me because this is exactly what I get from my clients, and I would expect nothing less from a college professor. "Just build me a URL logger". And then after saying "sure, we can knock that out quick" they ask for pause ability, title fetching (which is not as trivial as the author makes it sound if you want to handle edge cases: https://stackoverflow.com/questions/64051968/retrieving-titl...), rich logs, log management (deleting records), GUI for logs, cloud sync.


In an ideal world, adding a simple user GUI would be simple. As in: all edge cases and special cases would not be handled, but would just result in an error - one that comes along with a suggested fix googled on Stack Overflow. The "simple" program teaches the users to program. Teaches them to leave analphabetism behind.


I don’t agree. Users can be in a hurry because they have to catch the train. They can have only one hand free because they’re holding a baby. They can be mentally handicapped so they’re cognitively unable to google stuff. They can be depressed and find themselves unable to accomplish even small tasks. Or they’re blind and there’s no accessible website that explains the error.

I think all those people have the right to use your app.

There’s a point in drawing a line somewhere. But please, don’t make your app hard to use /on purpose/, not even with good intentions in mind.


Those that have no train to catch, that have no babies to hold, that are not mentally handicapped and are not depressed can fix stuff and share it with those who can't.

And at work the "users" often have plenty of time. And even more, once they learn and automate half their job away. The world is not obliged to carry anyone along.


> making it highway-legal

this is the crux imo; an experienced engineer can hack together a compelling MVP of almost anything technical in a reasonable amount of time, but getting that same code in line with regulatory frameworks and the expectations of the end user is what separates a weekend script from a scalable business


As long as the client is accepting of time-T-for-feature-F, I quite like this mode of development. Whatever is built shows value at every step. As long as a suitable programming language is used to allow for the changing execution environment (in your example: a GUI).


I often think about this XKCD from years ago: https://xkcd.com/1425/

"When a user takes a photo, the app should check whether they're in a national park..."

"Sure, easy GIS lookup. Gimme a few hours."

"... and check whether the photo is of a bird."

"I'll need a research team and five years."


Reminds me of how some companies give take-home assignments to job candidates.

> Here's a task that takes a day to do sloppily and at least two days for you to show your best work. We want to be respectful of your time so please don't spend more than 2 hours on it.


I wonder if that's why I wasn't a good fit for a recent company I interviewed with using a take-home assignment... bear in mind it's been at least a decade since I last took a take-home, so things might have changed.

I received the task on Friday evening, and I spent the weekend thinking about the problem. Monday I had actual work, and on Tuesday I spent a day coming up with a shit implementation. Then I started improving it for two days until I think it got into good shape. I basically wanted to convey that I can start from the base, add functionality, and then iterate on the problem by documenting, refactoring, adding tests, evolving the code base, etc.

So I'd say I worked for around 24 hours over 168 hours and the feedback I got was:

> Thank you so much for your interest in joining <redacted> and the time you took to submit the coding challenge. In reviewing your application though, I have decided not to invite you to the next round of interviews as I believe there are other candidates that are a better fit for what <redacted> needs right now.

This is turning into a subjective rant, part of me wants to delete this but I'm just gonna go ahead and finish it; When I was younger, I used to love take-home assignments because it felt better than just doing algorithms, and then I used to get proper feedback, what I did wrong, what aspects I could improve in the future etc. This feedback gives me nothing. Just the realization I wasted time.


You have to lie about take-home assignments. That is the accepted way.

I had a take home test that was to last 2 hours, and so I promised myself I would spend no longer than that on it. As a defensive measure, I provided a complete minute-by-minute log of what I did for the last test I took...

There was no break, no pondering, this was flat out knowing exactly what needed to be done and just typing constantly....

0:00 - started cloning

2:37 - finished cloning

3:11 - npm install

3:53 - localhost working

7:24 - removed timeout (intentional bug left in)

10:41 - Got images returned from the server (npm run serve)

13:27 - CSS started - thinking about design

22:26 - Grid for desktop

36:16 - Included user info

49:56 - responsive

56:37 - Added performance section

1:07 - Name form

1:21 - Email validation

1:35 - Most fields done. Need DOB

1:42 - DOB done

1:53 - form styling

1:57 - tidying up

2:01 - Saved this file

I didn't end up getting the job including feedback such as...

- Did not upload job search indicating repo to GitHub

- Didn't use `specific css attribute`

- Grid is rudimentary (the 11 minute one I made)

- HTML could be more semantic

- Form validation for name allows numbers (it's actually a programming fallacy to assume otherwise; names can technically be anything)

- Complaints about having multiple classes in one file.

I'm wondering exactly where I could have fit these in?

The week before I spent 4 hours on a task and delivered it within 12 hours of the assignment. I got an email with a single paragraph. `Unfortunately, after reviewing, the team has decided that they won't be moving forward to the next stage of the process with you`

These tests all have unrealistic expectations. I used to enjoy doing them for the learning experience, but now, it's just a solid graft with zero downtime coding the same app time after time.

So whenever I chat to a recruiter about the recruitment process I always drill down into the take home test, and the process of doing that lets them know that I won't be doing any take home tests. You have to simply question it sometimes, and you can get it upgraded to a pair programming assignment, which, if you are honest and have experience, works in your favour.


Oh man that is a truly fantastic timeline. Thanks so much for sharing it. It really clarified for me why I can’t ever seem to get the take-home assignment done. I care too much!

I’ve withdrawn my application the last few times I’ve gotten take-home assignments: I’d get to the 2 or 3 hour mark, realize I’m still working on clarifying my documentation (because I want to communicate that I consider this essential to a minimal product) and haven’t finished the feature(s) yet, call myself a shitty developer who is apparently an imposter, and then call it quits.


I've probably burned some bridges this way, but whenever this has happened to me I've sent strongly worded messages to the recruiter and hiring manager. It probably goes nowhere, but I think it's time that we as an industry stop putting up with this shit.

It's also a great indication that it's not a company you want to work for anyway, but I don't have the stomach for accepting this level of disrespect towards applicants.

The only people who benefit from this situation are the people who have the luxury and freedom of 20 hours to spend on take-home projects. And even when I was that person once, I still hated it.


The folly is that if you did upload to github, they would have wanted to see steady commit history as well, which would have given away how much time you actually worked on it.

The key is lying: you already had a similar code base to use, most likely because you'd interviewed so much you'd seen it before.


I actually didn't get a job for over-committing. I hit a point where things were working, and wanted to be able to git-diff in VS Code, and they declined my application due to that.

I was going to push as well, as I wanted to make sure that was set up right, but they associated that with 'pushing untested code to production'


yeah, it's completely arbitrary and they don't tell you what they are going to be scoring by

but then they pretend it's a meritocracy


That's seriously impressive work, though it occurs to me that you didn’t account for all the time spent thinking about what you were going to do in the two hours. And the feedback is hilariously bad.

I’ve often wished I’d focussed more on coding over system administration but reading accounts like this makes me happier about my recent career choices.


I had a take home a couple months ago. I'll spare you the details but if the output was to be non-hacky it was 4 or 5 days of work. I put in ~20 hrs, sent it back and said, "Here's where I'm at. I'd love to finish it but it's not reasonable to expect so much time if the final answer might be 'No thanks.' I'm not into such waterfalls..."

To this I'll add, I have a saying: "How you hire, is who you hire."

40 hrs? Waterfall? 2021?? And we all have been led to believe dinosaurs are extinct :)


Same here. Quite a long time ago I got a take-home assignment. It was about setting up a router on a Linux-based machine. I went with a design that focused on maintenance and understandability, so I decided not to use iptables directly. After a few days I gave up, because there was an issue with the upstream distribution that I was not aware of. I actually had a very good setup after the _deadline_, but it was too late.

I would never do take-home assignment again =))


I always do better with take-home assignments than I do with a whiteboard or live-coding interview.


I only accept paired assignments. If you are going to use my time, the least you can do is offer an equal amount of time on your side.


How does this work? Who is the other party (someone from the company?) and what do they contribute?


They contribute the evaluation and time-boxing


"Thank you for your time, we will keep you in mind. Next!"

The point of a programming assignment is to see if you can actually write code. No one is expecting a perfect implementation, just working code that fits the spec, plus documentation and tests.


"here's a computer, please implement a function in $languageOnYouCV that receives two words and returns true if they are anagrams of each other, false otherwise, if time permits add some unit tests" seems to fulfill that need just as well, and should be quick enough that being in the room or not is more about what makes the candidate more comfortable and gives the most insight for the interviewer.


That sort of problem is about the right level of complexity for a candidate to show they can think about a problem, provide an implementation, and discuss alternatives. Something they should be able to do without ever studying or grinding LeetCode.

However, there's always room for the interviewer to muck things up.

For the anagram problem, there are a range of solutions, including elegant & inefficient, clever & efficient, and robust & efficient. Do you dock someone depending on which they reach for first?

There's limited time, and a candidate may assume you're mostly interested in "how they think" and thus focus on the algorithm alone. After the interview, do you run the candidate's solution against test cases that you never mentioned, docking them for "missing" matters of casing or non-alphabetic characters? If the candidate themselves wrote unit tests, do you dock them for missing test cases you felt should have been included?

Do you dock them for not checking for invalid inputs, even though the candidate might normally work in a type checked variant of the language that wouldn't have permitted that anyway?

So many possible hidden assumptions. I know I've been rejected for similar unstated expectations in the past, and in retrospect, I know I've been guilty of doing the same to others.
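
To make that range of solutions concrete, two common shapes of answer (a sketch; both sidestep the casing and punctuation questions above):

    def anagrams_sorted(a: str, b: str) -> bool:
        # Elegant and short: two sorts, O(n log n).
        return sorted(a) == sorted(b)

    def anagrams_counted(a: str, b: str) -> bool:
        # Count up through one word and down through the other: O(n).
        counts = {}
        for ch in a:
            counts[ch] = counts.get(ch, 0) + 1
        for ch in b:
            counts[ch] = counts.get(ch, 0) - 1
        return all(v == 0 for v in counts.values())

Which one a candidate reaches for first says little on its own; the interesting question is whether the interviewer ever asks them to compare the two.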


At that point it's not a coding exercise, it's a full blown case study.

This isn't a programming contest and your company isn't ACM


That is where we interviewed as an industry 10 years ago. Then people noticed a few holes (what, they don't know fundamental $x!?) and those were filled. List vs map, complexity, tests, trees and traversal, http api stuffs, unixy things, and then into networking fundamentals, distributed systems stuff, etc.

10 years ago, nearly no candidate pulled a distributed queue solution in a design interview and now it is a standard answer in their pocket.


Is the candidate allowed to use Google?

Just asking, since it's a very easy task, but I'd wanna double-check how to reverse a string in that language.


anagram, not palindrome.


It helps me filter out shitty companies. I don't want to work at a place that does not value my time. And there are plenty of companies to work with.


The take-home assignment seems to me a slightly broken system; it's easily gameable because there's zero supervision.

Let's say there's two candidates for a position who are both asked to complete the take-home assignment, Candidate A and Candidate B.

Candidate A is the superior candidate and spends only two hours on the task (as requested), and Candidate B spends 2 days. Candidate B submits the better assignment and is more likely to land the job as a result, despite Candidate A being the better candidate.


We do a take-home assignment for junior devs where I work. The actual coding knowledge is pretty minimal. Few people fail based on the code alone. What we're actually looking for is if they can follow directions. A surprising number of people can't read a list of requirements and make sure they all get implemented.

For junior devs, we don't actually care if they do it in 2 hours, 2 days, or the whole 7 days we allow them. There's no bonus points for turning it in the next day, and I don't think anyone has ever actually done that.

But we're at the point that if they fail a single point in the requirements, we stop looking at them. This was a really hard decision, because there are candidates who are really nice and seem to otherwise do good work, but we've hired some of them and they inevitably continue to skip steps even in simple tickets. We end up letting them go after multiple warnings and months of wasted time, and having to do it all over again.

Of course, if you can't code you aren't going to get in either, but that actually seems to be a lower bar than following directions.


Very insightful and true. Even during a conversation, many people "translate" what they hear into something else, and will quote back, a few seconds later, a totally different sentence and think that's exactly what you said.

Kids in school often don't read assignments properly either, and skip everything until the first question. I don't know if they think it saves them time or fear there's a trick somewhere that they will avoid that way.

In life in general, people like ambiguity; it helps to smooth things out. But in programming it's a problem.


I think it's a profound fear of ambiguity that drives these behaviors: the urge is to move past the 'understanding' part by finding some single interpretation, and then move on to the 'doing' part, which is the satisfying part. I think formally writing down the specification for any kind of work starts with holding all the ambiguities of it in your head. In this case, the approach depends on whether the student is more interested in 'computer programming' or 'software engineering'.


From a hiring standpoint, the problem imo, is that employers set the timebox thinking that it means something when comparing candidates' solutions. I couldn't care less whether somebody spends two hours or two days on it. I want to see the best solution somebody can produce. I have engineers that can do things quickly and others that can do them more slowly, and both are infinitely more valuable to me than someone who can't provide a quality solution under any circumstance. In my experience, if you're filtering out the slower engineers that produce quality work you're going to be in for a very long hiring process and miss out on a ton of great engineers.


When I've interviewed using homework like this, it was with the sole purpose of having the candidate bring it along and talk through it. That won't be gamed. And, by the way, that lends itself very well to bringing along whatever fitted in the time box, and having a discussion in the meeting about the tasks that remain, and about their prioritization process. You may like it when you see less production code, in favor of some tests! People coming in with a full-fledged solution, and then telling me they didn't have time for a test... what am I to make of that?


This is the right approach. You don't disqualify based on the take home (unless it doesn't work at all).


Unless it's a nonsense solution maybe, or you clearly demonstrate that you don't know what you're talking about. But otherwise it not quite working isn't even the greatest obstacle. (Though those tests working would be nice...)


I'd say selecting the candidate who will work for 2 days and only claim 2 hours is the system working perfectly.


Sounds like there may be a niche for some kind of industry standard automated logging system.

Nothing complicated - shouldn't take too long.


Are you joking? Should candidates install spyware?


Seems obviously a joke based on the project from the article


If you have 2 hours to submit the task and you submit it after 2 days you are already out


So... you select for people that can get home within two hours and still have enough time left over for the assignment, as well as people that are willing to do the assignment in the parking lot?


Oh, we don't even meet face to face before you write code. We set up a time where you have X hours open, and then we send you an email with a couple of different assignments to choose from. Then we expect to have your code emailed back to us within X hours.


This might be the worst technical assessment format I've ever heard of. I really don't get the reason for adding time pressure to this. You're combining the worst elements of take-home assignments and whiteboard interviews.

Why not send people the assignment and ask them to return it within a couple of days? What are you gaining by limiting it to hours and turning it into a remote exam?


This way it ensures you only spend 2 hours on it without being penalized. If you are free to spend any amount of time you can be sure someone spent 40 hours working on it, you really don't want to compete with that. So another person would say that this combines the best from take home and white board, you don't have to travel anywhere for the interview, you get to do a project rather than algorithm, and it doesn't waste a lot of your time.


A 2-hour project is hardly a project. It's still a contrived problem with limited scope. I don't see it as an improvement. And you're assuming that the person who is assigning the task can estimate it correctly (this goes back to the topic of the OP).

I suppose people's preferences vary a lot. Personally, I would turn this down. The format of "do as much work alone under pressure" is extremely offputting. I'd much rather do a whiteboard interview where the problem is well scoped in time and I can discuss my solution with the interviewers. I also wouldn't mind a take-home that actually takes 2 hours that I can do whenever I please - but asking for the work to be returned within hours is a dealbreaker for me.


…but they said they schedule the 2 hours for whenever you please, so the only thing being limited on here is it being an honest “2 hour project” rather than a “10 hour project that we expect you to lie about.” You still get to do it whenever you please.


I don't know what kind of assignments you think we give people, but they are things that anyone who can program can do, without much thought, in well under an hour. The whole idea is to weed out people who don't know how to program, or don't know how to write unit tests or any kind of documentation.

If you are having time pressure challenges with our assignments we also don't want you.


As a potential employee I would never agree to this. If you want me to put in two hours, I do expect to have progressed further into the interview than "you are one of 400 candidates, here's your test!".


Experience shows that many people straight up lie about their programming skills, to the point where they can't write a FizzBuzz to save their lives.

But also, if the person is so unwilling to co-operate that they can't show basic programming skill, then I guess it wasn't meant to be.


Yup, the worst aspect of it isn't that it takes a long time to make it work right but the feeling of imposter syndrome and the fear of expectations if one manages to get in.

Like, if you expect me to write 1K lines of code in 2 hours as I read documentation, debug, test it - then god knows what the expectations are at a full day at work.


> A few students came to me asking if I had any ideas for a software project that they could work on outside of class.

> I explained an idea for a utility that I had been wanting: A desktop program that monitors my clipboard for URLs and logs them automatically.

I think this person misunderstood the students: they wanted a project to practice programming, not a project to practice project management. The programming parts of this would take 2-4 hours; it isn't a terribly hard thing to code. All the project management parts would take days or weeks or even months or years, depending on how polished you want it to be.

Giving them a project management project because in your mind project management is more important just to prove a point doesn't help them at all.


> The programming parts of this would take 2-4 hours

This is, literally, the error that the entire article works on dispelling.

Anyone can write a rough command line tool that monitors the clipboard and prints to stdout. Writing a usable GUI application is a lot more work. That’s not project management, that’s coding work.

The ratio of project management work to coding isn’t 10:1 or even 100:1 like you’re suggesting.


> Anyone can write a rough command line tool that monitors the clipboard and prints to stdout.

There's a good chance that this will solve 95% of all needs. Most of the features suggested in the article beyond the original premise are unnecessary.


>There's a good chance that this will solve 95% of all needs. Most of the features suggested in the article beyond the original premise are unnecessary.

The teacher is the client. The client is the one who will better estimate what is necessary or not.

If you own a company and someone wants to contract you for work, you start explaining how what the client says it needs is not necessary and how he can manage with less than he demands?


> The teacher is the client.

This is absurd. The students are the client. They asked the teacher for ideas for a hobby project. The teacher provided one, then followed up with a question about an estimate. Naturally, a student gave an estimate of how much time it would take to produce the product that they themselves were envisioning.

It’s not unreasonable for the teacher to point out additional features they might want to do, and how that affects such an estimate, but it IS unreasonable to call the original estimate “wrong.” It’s not wrong for a particular scope.


> If you own a company and someone wants to contract you for work, you start explaining how what the client says it needs is not necessary and how he can manage with less than he demands?

Absolutely, you don't just build the first thing that the client asks for. And certainly not as a single main deliverable.


Every idiot with a stupid idea for a piece of software is a client, but vanishingly few clients are good customers.

In different words: there are tens or hundreds of thousands of people who can describe software that would be very useful for them, but very few of those ideas would be profitable to fully develop.

Opportunity cost! Every stupid idea implemented snuffs out what could have been a good idea.


If a client wants to pay my going rate for development work, then it's profitable.

It may not turn out to be profitable for the client, but that's not my problem.


> you start explaining how what the client says it needs is not necessary and how he can manage with less than he demands?

Sometimes, yes, it is necessary to coach a client. It takes trust, but it isn't unheard of.


Especially if you're an in-house consultancy


Studying Computer Science is about learning to program (among other things), not learning what it’s like to have clients.

I’d have been incredibly disappointed to have a teacher like that.


Given the circumstances it seems reasonable to assume a programming teacher can use a program without it being foolproof enough that my mom can use it. Further, if they wanted more features (usability-wise or otherwise), they are also knowledgeable enough to ask for them, unlike an average user. Still, the students even asked clarifying questions, the author wrote. And yet they came up with a 2-hour estimate.

Of course nobody is going to come to a consultancy firm to ask for a command line utility that technically solves the problem. That's an entirely different situation you're describing.


> The teacher is the client. The client is the one who will better estimate what is necessary or not.

If you work in a consulting company, it is "good practice" to "convince" the client that he needs a lot more features so that you can sell him more man-days.

On the other hand, if you get paid a fixed price, incentives are turned around.

Incentives matter.


> If you own a company and someone wants to contract you for work, you start explaining how what the client says it needs is not necessary and how he can manage with less than he demands?

Yes, it happens all of the time actually. Most clients don't come to you with clearly thought out solutions, they come with poorly articulated problems, and your job is to explore the problems they want to solve until they are well understood, and then devise a solution with the lowest cost.

Of those rare, rare clients that do know exactly what they need, they still have multiple constraints beyond those needs. For instance, budget. The client wants X within firm budget Y, X cannot be done within budget Y, therefore you make the case that either the budget must expand or they can make do with X' which only does 90% of X and that will fall within budget Y.


Yes! If you blindly do what any client says, you will waste a lot of your time and a lot of their time.


Reaching the first 90% is easy. The remaining 90% is the hard part.

As the saying goes: the first 90% of the project takes 90% of the time. The remaining 10% takes the other 90%.


One of my CS teachers actually said "The programming is always the easy part. Designing the features and the interface usually takes 95% of the time."


Your CS teacher never looked at a web browser from the inside, or at other actual big scale software.


Still, the teacher is right for most user-facing software that most of us will ever write.


I'm not convinced. If designing the features and interface weren't so challenging, why do the user interfaces and other development APIs to web browsers still change 30 years after they were first invented?


A web browser might have a higher density of advanced algorithms than most software, but only because it has unusually extensive, varied and complex requirements (e.g. a number of complex standards like HTML, JavaScript, CSS, WebGL) and unusually high quality standards (foolproof, high performance, standards compliant, secure...).


Or maybe they were exaggerating for emphasis, as humans do.


Except it isn’t supposed to log things that aren’t urls being pasted into the browser.


>being pasted into the browser.

Not just the browser:

>links that I send to people across all the different messenger apps I have installed.

The article does say it might be useful to log where the URL was copied from, and potentially also where it is pasted to:

> I would also expect to know where the URL was copied from. I.e., whatever application is in focus when the clipboard is modified. It probably isn't feasible though to track everywhere it is pasted (maybe it is actually...).


A naïve URL detector isn't hard to write. A good URL detector is difficult, not because it is hard to code but because it is hard to understand what behaviour a user would expect.
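
For illustration, a naive detector is only a few lines (a Python sketch; the pattern deliberately accepts only absolute http(s)/ftp URLs):

    import re

    # Naive on purpose: an explicit scheme, then no whitespace to the end.
    URL_RE = re.compile(r'^(?:https?|ftp)://\S+$')

    def looks_like_url(text: str) -> bool:
        return bool(URL_RE.match(text.strip()))

Deciding whether "example.com", "mailto:" links, or URLs with trailing punctuation should count is where the user-expectation work hides.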


So? In any decent programming environment, checking if a string is a URL is a one-liner.


Not without false-positives.

E.g., is "foo.bar" a URL? Maybe. But it could also be a filename. How do you know if it's a "real" URL or not?


In The Good Ol' Times, I would have replied "let's check the TLD", but now that list is basically trending to include the entire English dictionary... so I guess the only response these days is "ask DNS". So we've already gone from "pattern-match a string" to "pattern-match then make network calls", which (as anyone who's done any network work knows) also requires managing a bunch of possible/likely error states (offline, timeout, partial response, response format, etc etc). So yeah, nothing is as easy as it looks.


Can't quite do this most of the time due to privacy concerns. Leaking random URL-looking text to the network is a big no.


So now we have to ask for user consent (installation time? first run?) and respond accordingly, adding another piece of UI... but it will only take an hour, right...?


You probably also want a setting to detect slow networks and disable it there in case the user is tethering etc.


Strictly speaking, a URL begins with a scheme followed by a colon.

Schemes can be registered with IANA (or not), and everyone knows the most common half-dozen or so. People often forget "mailto:" and "tel:".

The project brief asks for one thing, but the practical implementation probably requires something else.

This is a good lesson to learn, and this is how two hours becomes two days, becomes two weeks.
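
Under that strict scheme-first reading, the check itself stays short (a sketch; the allowlist below is an assumption standing in for the "common half-dozen", not IANA's registry):

    from urllib.parse import urlparse

    KNOWN_SCHEMES = {"http", "https", "ftp", "file", "mailto", "tel"}

    def is_strict_url(text: str) -> bool:
        # A URL begins with a scheme followed by a colon.
        return urlparse(text).scheme in KNOWN_SCHEMES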


It isn't a URL. As per RFC 3986 [0]:

> The term "Uniform Resource Locator" (URL) refers to the subset of URIs that [..] provide a means of locating the resource by describing its primary access mechanism

Since "foo.bar" does not describe an access mechanism, it is not a URL. Yes, you could make the argument that "foo.bar" is a relative-path reference as described in section 4.2, but that is only used to:

> express a URI reference relative to the name space of another hierarchical URI

So "foo.bar" can only be considered a URL in the context of another given URL, and in your example there is none.

[0] https://datatracker.ietf.org/doc/html/rfc3986#section-1.1.3


I don't have to worry about that, because I'll pick a language that offers a `URL` object or something similar, and which handles the validation for me.

Additionally, if foo.bar were a valid URL, then I would expect it to appear on the list. I can't read the user's mind as to whether the text should be treated as a URL or not.


Honestly, I'm a bit rusty but I feel like I could've done this project in C# with WinForms in a bit over 2 hours. Not all the advanced features asked for, but the minimal viable product.

It's a shame that Microsoft is shifting away from the old, reliable WinForms for their new UWP system, because making a quick and dirty implementation of many of these requirements would actually take very little time for many moderately experienced C# GUI programmers. It'd be little more than a tray icon with a list view log, a settings screen and a crude search bar, but it sure would be functional.

I don't know any other GUI framework that would allow me personally to do this in the 2 hours as suggested, though. Even with other batteries-included toolkits like Electron (ew) you would need to do all kinds of special searches to get things like clipboard and settings sync working right. Maybe Python and Qt could serve a similar purpose, though you'd probably need to pull in a lot of packages to get clipboard monitoring and database storage working right.

I get the point, which is that simple ideas often have hidden, complex requirements that take a lot more time, but in this circumstance it's really more a lesson about how bad the state of GUI programming still is after all these years. VB6, despite its stupid programming language, is still the golden standard for a GUI application design framework for me and we've barely made anything equally usable.


Fortunately, UWP was deprecated recently. https://www.thurrott.com/dev/258377/microsoft-officially-dep...


WinForms is still very much supported. They even have a short segment in their VS2022 launch videos dedicated to WinForms development: https://www.youtube.com/watch?v=irfQczdVjRA


Only one way to find out.

1. download or use some screen recording software

2. Push record

3. Start coding

4. Upload your video to some site (youtube, etc.) when you're finished - or just live stream it.


> VB6, despite its stupid programming language, is still the golden standard for a GUI application design

I think the reason is that many programmers have an urge to do things in a stupidly hard and complex way to look competent.

Real programmers do not use GUI drag and drop design tools.


That's right, they use butterflies (or emacs).

Why shouldn't you design a GUI using a visual interface? Do you also write SVGs in Notepad?


I should have written 'real programmers' in quotes.

I was referring to the classic joke chain mail about 'real programmers' punching flip switches on mainframes or using FORTRAN, not Pascal.


But real real programmers use whatever the hell gets the job done faster.


It's not necessarily coding work for someone who intends to use the project themselves and can run

    tail -f /var/some-log-from-a-clipboard-daemon | grep big-tld-regex
They asked for a project idea and he gave them a product idea.


I would be quite happy with this, actually.

I wouldn't even bother with the tld regex and just settle for "http"

The part I'm missing is what to tail?


There is nothing to tail, because (AFAIK) there are no clipboard events in Linux (well, in X). The best you can do is poll xclip every X ms, so you'd get:

  while sleep .2 ; do xclip -o ; echo ; done | uniq | grep -E 'https?:'
but that is pretty wasteful of resources. Luckily processes are cheap on Linux :)


Thank you! I can't imagine myself clipping something more than once every 5-10 seconds, so I can set the interval to that.

I want to take this moment to remind you that the Web includes http, not just https, and ignoring it is a failure at accessibility and compatibility.

For myself a week from now looking for this, here is the version which works for me:

      while true; do xclip -o; echo; sleep 5; done | grep '^http' | uniq >> ~/url.txt
Thank you again, this will make my life much easier.


Plasma/KDE's Klipper clipboard app has Actions that will do this, fwiw. http://linux-blog.org/make-klipper-work-for-you/.

You could probably do it very simply by remapping ctrl+c depending on your DE.


    import pyperclip
    import re

    url_regex = re.compile('^(ftp|https?)://')

    while True:
        text = pyperclip.waitForNewPaste()
        if url_regex.match(text):
            print(text)
Seems like a clean, blocking API. I peeked at the source code. It's polling at 10ms intervals under the hood.

... LOL.


A GUI this simple isn't that hard. Not compared to learning some of the foibles of CLI programming. What the hell is a stderr, and why is >| giving me an error? Reading man pages is a skill unto itself. For those raised on Windows with Visual Studio (not VSCode, but OG Visual Studio), the utility's GUI is quick and easy. Definitely within the 4 hours window if you yolo the GUI design. A button for enable/disable, and a config option for which file to dump the URLs to. What each student decides the UX for the GUI should be is up to them, but that's flexibility in the assignment at the university level, and it shouldn't be a professional exercise in project management.
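
For what it's worth, a yolo version of exactly that (an enable/disable toggle plus a log file) fits in a screenful. This sketch uses Python/tkinter rather than WinForms to stay with the thread's other examples; the log file name and poll interval are arbitrary:

    import tkinter as tk
    import pyperclip  # third-party, same library as the snippet upthread

    LOG_PATH = "urls.txt"  # stand-in for a real config option
    root = tk.Tk()
    root.title("URL logger")
    enabled = tk.BooleanVar(value=True)
    tk.Checkbutton(root, text="Logging enabled", variable=enabled).pack(padx=20, pady=20)
    last = ""

    def poll():
        global last
        text = pyperclip.paste()
        if enabled.get() and text != last and text.startswith("http"):
            with open(LOG_PATH, "a") as f:
                f.write(text + "\n")
        last = text
        root.after(500, poll)  # check the clipboard again in 500 ms

    poll()
    root.mainloop()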


Good project management would prioritize the feature list, cut out 95% of the effort to define an MVP that could be delivered with 2-4 hours of coding, and delivered to the user to find out if it's worth doing anything more.


I could write it to properly log it to a user file, with reasonable default limits and take some of those as command line parameters etc, in a few hours.

Running the program at start isn't programming; it is putting it in a "run on start" location, and helping users do that is a part of managing the project, not programming.

Figuring out what settings a user would want for it is also project management and not programming.

Starting/stopping is just starting and stopping the program, every OS already provides that functionality. Maybe someone would want a better UX, but then you are doing UX work and not programming work.

So I don't really see it, almost all of the "gotcha" cases he talked about aren't programming problems.


But I didn’t give the students a programming problem. I gave them a software project to solve a problem I have, which involves UX and PM work along with some coding.


Right, but was that what the students asked for? I doubt they thought that they could launch a successful product in 2 hours worth of work.


I'm sorry but I gotta call this out; this is just peak HN arrogance. All we have is an article relating an anecdote and the reaction to it is to second-guess the author's actions/interpretation of events. Surely, the author has more context of what was actually happening, what was actually being asked for?

The actual words used were already vague enough ("ideas for a software project"). The answer to that would always be subjective. Why is there no trust in the author that he has a better read of the room? He definitely has more context of what the students were actually asking for. These are also students; I'd give the author the benefit of the doubt that he gave them a problem that would be the most productive use of their time given their skill level. Maybe that's why he didn't consider web browsers or bug trackers.

FWIW, I think the article makes a good point but is poorly illustrated, especially for this audience. I've no doubt your average HN commenter has projects/scripts under their belt more complicated than a clipboard logger, maybe even on far tighter time constraints. But c'mon, these are students! If you work in the industry there will definitely be different causes for project under-estimation.


But the author gave a project idea where most of the project was figuring out the UX concerns of non-existent users. It is a horrible project no matter how you view it. If you do UX then you need actual users to test what you write. If you don't do UX then this project is very easy. Sure, these students might not have done it in 4 hours, but not because they would get stuck on whatever the author talked about; rather, they would be looking up API documentation for whatever OS they work with, or chasing bugs in their URL detection code.

And since the author teaches software engineering classes and not UX classes, I assume this would be a software engineering project and not a UX project. Building out a lot of UX features without users is an anti-pattern; better to write a bare-bones implementation first (start from the command line, help the first users set it up, and see how they like it) and then work from there. For a hobby project, that would likely be the end of it, and I'm sure a good student could get it done in a few hours.


I think I have to agree about the "peak HN arrogance" bit.

Most things about software development are never taught to college students. This is actually an excellent project for teaching students to go beyond programming for computers and to build things for humans. Not thinking about UX is what leads to most software project failures, because we are fundamentally different from computers (do I even need to explain this?).

I wish I had teachers teaching me stuff like this in my software dev classes instead of the usual stuff. I wouldn't have had to waste years trying and failing to learn this on my own.


> I wish I had teachers teaching me stuff like this in my software dev classes instead of the usual stuff.

I think this would be a great class project with the teacher acting as a fictive user. I think it is a horrible hobby project to suggest.


This situation says a lot about attitudes to development in general. "It's just a hobby project, who cares?" is fine until it's useful enough to end up in production code without being production quality.

Most of UNIX literally seems to have been built like this.


> But the author gave a project idea where most of the project was figuring out the UX concerns of non-existent users.

> If you do UX, then you need actual users to test what you write on.

No offense to azhenley, but why do you expect any different? They asked a CS professor for a project idea. How many CS professors have project ideas that come with UX case studies? How many CS professors have the resources to guide students through actual UX tests outside of a research grant?

It's an outside-of-class project. It doesn't need a compelling use-case.

> It is a horrible project no matter how you view it.

Too harsh; this still has pedagogical value. Because, you know, they are students. The bulk (if not all) of the work you typically do at school teaches you things even if they are not portfolio-worthy.

> Building out a lot of UX features without users is an anti-pattern; better to write a bare-bones implementation first and then work from there.

I agree with the thought but, again, this isn't real-world software engineering. I feel I can't emphasize this enough. Maybe these students have only ever written from-the-stdlib-up CLI tools, and their teacher thought it was time to expose them to APIs beyond the stdlib. Who knows.

Might as well raise pitchforks because finding the most valuable customer for Northwind Traders (LLC) is counter-productive since the database is outdated. Better to gather data first, and why not normalize the schema while you're at it, so you can stream it to the cloud and get a better analysis than plain SQL. (Calm down: Northwind is a sample database for educational purposes.)


Yeah, I think I kind of lost the thread after a few levels here. The original point was that the student estimate wasn't that wrong for a hobby definition of getting a project done.


The anecdote related in the article implies a disconnect between the students' and the author's understanding of the situation. This seems like a pretty good reason to believe the author was plausibly wrong about what (at least some of) the students were asking for.

People who write blog posts on the internet are not exactly immune to poor communication.


> Running the program at start isn't programming; it is putting it in a "run on start" location. Helping users do that is part of managing the project, not programming.

I'm confused by this. Any program that doesn't automatically set up this "autostart" capability, in an OS/architecture-independent manner, is much more of a script-kiddie solution than a real program.

If a program can't run on any OS and any architecture without a multi-step instruction list, it isn't really useful to many people at all. (And if your solution to this is Docker, re-evaluate whether you want anyone other than a handful of devs to use it.)


This was supposed to be a student hobby project, not a company product release. You are thinking of this problem from the perspective of an enterprise programmer, not of a hobbyist enthusiast.


I’m baffled by the level of offence you’re taking at the idea that this is all part of a programmer’s job. Nobody needs to go to college to become a hobbyist programmer, it serves students very poorly to treat them that way. It’s the same reason we teach algorithms instead of just how to google. Code doesn’t exist in a vacuum.


> I’m baffled by the level of offence you’re taking at the idea that this is all part of a programmer’s job.

This is a strawman, I never said these things aren't a part of a programmer's job.

> Nobody needs to go to college to become a hobbyist programmer

Nobody needs to do enterprise programming on hobby projects; you learn that on the job. For a hobby project, it is more important that the project is fun, and very few people find enterprise coding fun.


The opposite is actually true.

The enterprise programmer thinks nothing of deploying some J2EE monstrosity with "simple" setup scripts, "just change this Windows parameter in the registry," and "run this command to create a service you can start and stop from Services.msc." That's what they live and breathe; that's the bar they are happy to clear.

The hobbyist enthusiast, on the other hand, will only have knowledge of actual consumer-grade software that "just works", so he will be at pains to make installation and execution simple and straightforward. He will take pride in every slick "just works" capability he can add, every tiny bit of OS integration he can squeeze, every line of setup-requirements documentation that can be removed.

One of the problems in the development of modern programming is that half the hobbyists are eventually horrified by the amount of effort it takes to develop on "modern" platforms and principles, and just give up; while the other half fall in love with such complexity, and happily proceed to add more, making the situation worse for the next generation. So development practices get harder and harder for no real reason.


> So development practices get harder and harder for no real reason.

I think it's even worse than that. I think that any time there is a widely adopted platform which is easy to customize without requiring any significant level of skill, there is some subset of people which churns out very simple tools to do popular things well enough, and stuffs those tools with adware / malware / spyware to make money. The platform vendor responds to this flood of crapware by adding new checks and processes to ensure software quality and security, and keeps doing so as long as quality and security is a problem. At a certain point, the platform in question is no longer the easiest target, and the people creating the crapware switch to the next platform, and the formerly-easy-to-deploy-to platform is now a complicated mess to write anything for.


This is probably the most accurate summary of the problems in modern development I have ever read. You're spot on. Thank you.


They are studying to become professional programmers. I think the enterprise perspective is correct, you don't want grad students with a hobbyist mindset.


I doubt you'll find many grad students who spend their free time writing enterprise projects. If this were a class project, sure, but this was something they would do in their free time for fun.


Every time a professor has said "do this in your free time," they've really meant that it's an ungraded class project that you need to do to the same high standards. Unfortunately, you get less guidance and zero feedback, but if you don't push yourself to do well on your own, you're going to fail the upcoming projects that implicitly depend on your "free time" project.

In some ways, school can be more insidious in its demands than enterprise.


At that time, I enjoyed writing software only I would ever use. I can set up autostart myself; I can log in CSV format and write a tool later to process it; I can hardcode my exclusion list. I would still get enjoyment from the project doing all of those things. And using the clipboard API could open up my next project idea.

Implementing every feature the Prof listed sounds like something I would only do for money.

Even professionally, there is still use for code that only ever gets run by its original author.


> I think this person misunderstood the students, the student wanted a project to practice programming,

I think you misunderstood the students. The person who was actually present when the words were spoken says the students wanted ideas for a "software project", not some minimalistically defined "programming problem". It's completely reasonable to assume they wanted to practice real world software development, not just implement the most pedantic set of requirements possible.


A software project could be "write a web browser" or "write a bug tracker"; those are much better choices, since the interface is then an inherent part of the problem to be solved.

> It’s completely reasonable to assume they wanted to practice real world software development,

I've written real-world command-line utilities used by other programmers. It takes like an hour: I write the code and document how to use it in a readme, and if they have questions they just ask me. A lot of the time, attempting some complex, big-design-up-front thing, as suggested in this article, is just a waste of time. Fiddling with UX on a project that will only ever be used by a few programmers is a waste of time.


> A lot of the time, attempting some complex, big-design-up-front thing, as suggested in this article, is just a waste of time.

This is one of those things that comes with experience. A junior programmer will make a meal of the simplest task. An experienced developer will, if possible, interpret the problem in a way that makes the implementation trivial while still solving the user's problem. Sometimes this doesn't actually involve writing any code at all.


Really, I think it was more about showing off how much smarter and more knowledgeable than their students they are. "My students think it'll only take this long, but I've got a bunch of requirements I never mentioned that make this harder. For example, the no-logging feature? That wouldn't get used. The encryption feature? Pointless. Syncing with the cloud? Jesus, at this point you're talking about a fully fledged product, and you would only add these after multiple rounds of user conversations."


Agreed. A very annoying and condescending post. Typical of a high priest (or academic, as we like to call them these days).


The minimum viable product would take 2h-4h. A professor shouldn't be teaching students to prematurely optimize or to gold-plate their first effort.

Deliver version 1 before starting version 2.


I've often felt the difference between a junior developer and a senior developer is that a junior developer believes anything that isn't in-the-seat, headphones on, IDE cranking isn't "real development".


But you need some level of technical skill before you start worrying about user concerns, so junior engineers should first and foremost think about the technical parts, while the seniors, who are more skilled, can take a bigger-picture view.


Sounds like a good way to ensure your juniors never grow.


Why wouldn't they grow? Once a junior has good technical skills they can start worrying about user concerns and therefore grow in other areas. I see no reason why this would limit them in any way.


I agree with you with a tangential perspective.

I think any non-trivial task is going to surface technical limitations which might feed back into the stakeholder decision making process.

A junior might be alienated if they are outside that process, but if they approach it with a problem-solving mindset they'll likely have a more intimate knowledge of the closed loop.


I’d point out this isn’t a problem with estimation per se but with requirement gathering. The students didn’t press him for more specific requirements before offering an estimate. The real challenge though comes when the customer doesn’t know their requirements. They legitimately think they “just want a simple app to record urls”, but actually want all the other things listed. The thing that sets good developers apart is being able to tease out those requirements without being a jerk or making legitimately simple requests overly complicated, then deliver.


Good developers, yes. Which takes experience, given the myriad of colorful personalities, politics, and more. Though I don't think it's fair that the responsibility for "underestimating" is largely put on the developer, when the company is often big enough to have specific people who gather these requirements and map them out as part of their day-to-day.

That's my main problem with this advice and the article. Of course everyone wants a developer who can both develop and figure people out to produce the most accurate estimates possible. But it's getting to the point where developers are solely responsible, on top of ever-increasing technical requirements, while being given the minimum amount of practice figuring all this stuff out.


My spouse is in healthcare, and at least in the US, it seems to be a similar situation.

There’s a big medical structure. And in theory, there’s lots of folks to deal with XYZ. In practice, docs need to do YZ because they are the ones who have the right combination of training and authorization to complete the task. When more people get hired, the doc just has to communicate YZ to more people. Either the task requires nuance that is missed if you’re not the doc, or the task requires direct doc authorization at various points regardless. Doesn’t decrease their burden at all, even though there’s more people working on YZ.


I think this is a great analogy.

Creating a game of telephone between the stakeholders and the people developing software is often going to impede communication.

The flip side of this is that a lot of technical people lack adequate soft skills to be able to do this requirements gathering well (probably the same can be said about a lot of doctors and what defines a "good" doctor).


Half the time the devs don't even underestimate; it's just that they're the easiest to blame, because (according to the people shifting the blame) it'll never hurt them.

One time, I had to estimate a UI migration. I made the estimate and added the disclaimer that it assumed X was true and that Y and Z would be provided. Of course, X wasn't true, because the supplier sucked, and Y and Z weren't provided by the business. So it ended up taking way more time than estimated.

Guess who got the blame anyway.


It's even worse when you get pressured into giving unrealistic estimates, or even get overruled and have your estimate ignored, and then get blamed when the project runs over the estimated time.

If that ever happens to you, don't wait around and suffer. Start looking for another job and save your sanity.


You deliver with the minimum you can get away with, then improve based on actual feedback, requests, and data gathered from usage.

What people think they will want differs from what people actually want, which is not necessarily what the data shows you should build. Build fast, improve incrementally.

The path highlighted in the article is exactly how you should not build a product. Don’t focus on all the things you could add, streamline your ideas as much as you can.


You're right - I think this is the only real way out of the dilemma. There are still some challenges with this approach, though:

* stakeholders often need, or think they need, a whole estimate up front. A developer may explain MVPs, sprints, agile, etc. and still get blank stares and "OK, that sounds good, but just give us a rough estimate of how long it will take/how much it will cost. We promise not to hold you to it (yeah, right)." In some settings, like governmental budgeting, it can be hard to even get through purchasing without a whole-product estimate.

* developers can be seen as oversimplifying by stakeholders who don't understand the approach. When presented with an MVP, they start complaining about the fonts, etc., and lose faith in proceeding.

* developers can be seen as overcomplicating by stakeholders who don't understand the approach. When a stakeholder's simple request for "just an app to capture URLs" balloons into a major project with bells and whistles over 10 weeks of dev cycles, management steps in and says "I thought this was just supposed to be a simple app to capture URLs."

The latter two come down to developers communicating effectively and stakeholders truly understanding their staff and processes. For the first, I honestly haven't found a great fix. No matter how I explain it, many customers simply want an estimate and a budget that I stick to.


> The real challenge though comes when the customer doesn’t know their requirements

Which, of course, is true 80-90% of the time.


Fair but this is the thing we all do wrong with estimates. I've come to the following system that I think works well and captures the uncertainty.

Estimates are given as one of the following choices: hours, days, weeks, months, quarters. You never attach a number to the units; each one means some "small number of X," where X is from the list above. Then, if Product has a question like "Why would Z take weeks?", you can have a discussion about the complexities, and the task can be further refined and/or split into multiple pieces, each estimated the same way; rinse and repeat.


You can do feature creep for a long time. If you just want a basic working version (that will probably drain your battery), that can be done really quickly.

This took me ~10 minutes:

    #!/bin/sh
    set -e
    while true; do
        URLS="$(xsel | grep -Eo "(http|https)://[a-zA-Z0-9./?=_%:-]*")" &&
        if [[ ! "${URLS}" = "${OLD_URLS}" ]]; then
            echo "${URLS}" >> urls.txt
            OLD_URLS="${URLS}"
        fi
        sleep 1
    done;


Isn't the >> going to cause urls.txt to grow geometrically?

eg, copying http://example.com/foo and then http://example.com/bar and then http://example.com/baz I think urls.txt would look like:

    http://example.com/foo
    http://example.com/foo
    http://example.com/bar
    http://example.com/foo
    http://example.com/bar
    http://example.com/baz
EDIT: I stand corrected. I should have run the code instead of mentally executing it.

One issue I did find when I ran it. The shebang should be:

    #!/bin/bash
as [[ is a bash built-in. I'm on Ubuntu - maybe this works on Mac as /bin/sh is an alias to /bin/bash?



Doesn't look like it. The URLS variable could more appropriately be named CURRENT_URL. Singular, not plural.


The user might copy a block of text with multiple URLs, all of which get pulled out by the grep.


Nice example. The problem with the article is that the client (the teacher) doesn't know what they want in this case. It's a good example of why feature creep and project scoping matter, but if all you really want is a script to log URLs, that's simple. Building a URL logger with top-notch security practices, cross-platform support, and a UI simple enough for my mom to use would be entirely different.


Using clipnotify[0], one could fix the busy-wait issue pretty easily.

[0] https://github.com/cdown/clipnotify
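For example, the polling loop above can become event-driven. A minimal sketch, assuming clipnotify, xsel, and a running X session:

    #!/bin/bash
    # clipnotify blocks until the selection changes, so the body runs
    # once per clipboard change instead of once per second.
    while clipnotify; do
        URLS="$(xsel | grep -Eo '(http|https)://[a-zA-Z0-9./?=_%:-]*')" || continue
        echo "${URLS}" >> urls.txt
    done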


Agree! Furthermore, the reason we have such bloated, garbage software is a combination of this kind of feature creep and a complete lack of modularity.

Want pause functionality: write a small utility to provide an easy way to start and kill any program.

Want notifications: write a program to monitor any file and provide notifications when something is added to it (see the sketch below).

Searching: grep or a GUI equivalent

Syncing: rsync/syncthing/Dropbox…

Someone will complain most people can’t use these tools. Firstly that wasn’t the initial question, and secondly that’s something I’d love to see tackled in a GUI tool. I don’t think it’s impossible, just misaligned with the incentives of people who make and overcharge for proprietary software.
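To make that concrete, a couple of those pieces are one-liners on a typical Linux desktop. A sketch, assuming the logger appends to urls.txt and that libnotify's notify-send is installed:

    # Notifications: watch the log, pop up a message for each new line
    tail -F urls.txt | while read -r url; do notify-send "New URL" "$url"; done

    # Searching: plain grep
    grep 'example\.com' urls.txt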


>The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.

- Tom Cargill, Bell Labs


Which is why you should ensure the first 90% is as little work as possible. Trying to take the other 90% into account from the start will just create yet another 90% later, as all your original assumptions turn out to be wrong or mesh poorly with each other.

Some things are important to plan since they are hard to change, like user data or API boundaries, but other than that simple is usually the best to start with.


This adage forgets the last 90% of development time required after the software is released.


I really wish this weren't so true....


Why? This is true in most processes that involve building things. It doesn’t take very long to finish the foundation and frame of a house either, but making it liveable takes quite a while.

When we built a new city hall (under time and budget, by the way), the actual building was finished after a year. It took another two years to make it usable as a city hall.

Why would software engineering be any different? The only real difference is that you have to do all the parts yourself rather than specialising in bricklaying, electrical work, plumbing, and so on. Yes, this is really how little I know about construction, and I still managed all the parts of the project concerning IT and networking (without really knowing anything about the installation side beyond the blueprint, and trusting my operations guys).

So maybe I've talked myself into agreeing with you, though perhaps not for the same reason. I wish it weren't true either, but mainly because I wish the software development industry weren't still so infantile that it can't predict and distribute workloads well enough for projects to be delivered under time and budget more often than not.


It's predictable that the walls/ceiling of a building will take X time, and the wiring/furnishings Y time. Software development is different because even after you have allotted time for every issue you can imagine, unexpected further issues end up appearing which delay you beyond the projected deadline.


Was curious if I could pull this off, and managed to get a Mac version "working" at an "acceptable" level (there isn't even a quit button yet) to serve as a starting place.

Was embarrassed by the quality, since it's just 5-6 snippets (of various styles) pasted together... but I might as well share it anyway to prove the point: https://github.com/owenfi/clipboardcopies

The article does strike me as "scope creep", or missing the point of the question. I've taken this super crappy version and added it to my startup items; I think I might find it useful, and if I do, it might grow over time. The polish can happen later if needed. I was able to accommodate a bit of the polish out of the gate by saving to individual files based on the time; the granularity roughly corresponds to the polling frequency. (Turns out there isn't a clipboard-did-change notification?)

To me the "complexities" are those things lurking beneath the surface that a newbie doesn't know how to resolve, and an expert forgets to even point out. In this case the complexities I saw were:

- Getting it to run required revoking and updating my provisioning certificate (it could've been solved another way, I'm sure). This was fast because I've done it plenty of times in the past, but things like this often throw up unnecessary roadblocks that aren't foreseen in the scope. (Luckily I already had Xcode installed.)

- Writing to the directory I wanted (~/.clipboards) proved futile due to sandboxing, and after 5-10 minutes I just gave up; since I want to consume this in the terminal, I can just cd into the app bundle's documents area (or fix it later).

- How do you get git to merge two repositories that were started independently? (Maybe I should've started from GitHub in the first place, but I didn't think I would be saving or sharing it.)

I'm not saying it's not worth thinking those things through or practicing project management...just that the first student was right.


I think interpreting the intentions of the author and the students should take into account that the author's research is in human-computer interaction, and that the students are most likely aware of that fact as well.


That's fair, and maybe if they wanted a "project", then project management was the right thing to look at.


But a project manager would allocate 2 hours just for the initial project discussion meeting, so 2 hours and 4 hours are obviously not project manager estimates.


I can write an OS in just 2 hours: a few lines of code that can be booted straight on the hardware and sit there doing nothing. Is it an OS? It is. Is it useful? It is not.


It is very useful if you want to learn how to program OSes.


I want to see that done on modern x86-64. Make a livestream or something!


You might enjoy this series of blog posts. I went through them last year and learned a lot!

https://os.phil-opp.com/


This post ignores the fact that the answer to questions about development time depends on the technology used (the tooling, programming language, and platform/supported operating systems) and, first and foremost, on how familiar the developers are with these. I believe the student who said it was going to take them 2 hours. For example, I can guarantee you that I would have been able to finish this project in less than an hour back when I was fluent in Realbasic. Today, it would probably take me 4 hours in Go + Fyne, because I'm not yet familiar with Fyne. It would take me the same amount of time in Gtk, because the framework is kind of complicated and the docs are sometimes a bit obscure (though overall excellent). If you add testing, documentation, and deployment, then it's going to take at least 10x the original time. That wasn't part of the original description, though.

The example just goes to show that if your original spec is incomplete, doesn't specify the quality, testing, and documentation requirements, and you pile up feature after feature (like cloud uploading), then it's going to take much longer. I'm sure the student knew that, however.


The other thing is, how well researched do you want to be? When I'm learning something new, I usually want to gain a deep understanding and really get into the ins and outs of it.

So when figuring out (for example) how to do X in language Y, I'm not very likely to stop at the first SO answer and call it done, because very often there's more to it (e.g. tradeoffs in performance, compatibility, composability, pitfalls & caveats, etc.). But the difference it makes in time taken can be massive. Maybe finding that first SO answer takes a minute, but gaining a deep understanding can take a few hours of reading and experimenting.

This ends up feeling weird for take-home assignments if you're using a new stack. My normal mode of operation is to gain deep understanding (and people generally consider me insightful, which I attribute to that pursuit of understanding), but this may look bad to someone who already knows the stack and/or thinks that you should just copy & paste.


A good friend was working on a scientific model for controlling weeds on farms. He couldn't code, so he asked me to turn it into a program: "it is very simple, only 8 steps at most". The 8 steps turned out to be 2,000 lines of code after 6 months of work. Later he changed the model a few times and added a few new steps. We could only finish the prototype 16 months after starting, and after a few rewrites, at 3,500 lines. Now we're on our way to launch. The model was very complex in itself, and each of those steps was a rabbit hole. On top of that, I was not a professional programmer and had never touched Python, so it was a very challenging journey.


Just curious - why would you use python if you don't know it?


I did a Team Treehouse or Codecademy course on Python maybe 6 or 7 years ago, but that was all; not even a single project. My programming experience mainly existed because, as a teenager, I would create scripts for Ultima Online using EUO [0] and, a few times, Simba [1]. Also, I managed an ecommerce operation for a few years, so I dealt with coders and eventually code.

Actually, I started this project in VBA after my partner insisted on it, because he already had all the parameters for the model in Excel sheets, along with a GUI he had created for the input. I accepted because I had taken a course on it at uni, but I soon felt the need for an easier way to program, and that's when I decided to move to Python.

[0] http://www.easyuo.com/

[1] https://wizzup.org/simba/


Fair enough, it sounded a bit like you learned python specifically for this project.


My bad for not being precise: I had to learn Python specifically for this project, because the only memory I had of the course was "Python's bare basics were easy to learn". I had never even installed Python locally; I saw those courses at the time as just games that give you some minimal idea of the language, as I had no intended use for it.

Adding to this topic, I can now say it was easy enough, even for someone like me. A few times I needed help from actual people, and I got it from the kind and amazing folks who hang around the Stack Overflow Python chat, but mostly the Python docs (or some simplified form of them provided by another website) and Google queries would do the job when I hit a code obstacle. Python was overall very intuitive for me, apart from a few surprises (mainly regarding scopes and names).


Honestly, I learned this lesson as a young engineer, as I suspect many others did, by giving a project manager an off-the-cuff answer to "how long will this take?"

The next month of embarrassingly explaining, every Tuesday and Thursday, why I was late was enough for me to now take a serious amount of time, come up with a realistic estimate, and then add a lot of buffer. Better to under-promise and over-deliver.


There's an old saying that you can tell developer maturity by the units of time they use for estimating code.

Only brand-new developers (and Sith?) estimate things in hours. Experienced devs estimate in days, but senior developers won't give you a number in increments smaller than weeks.

Also, any manager who trusts an estimate from a new employee, and especially a new developer, is an ass. They should have known better, and giving you grief about it is just doubling down on a category error. We've all had that experience, but I'd like it if we all learned to reframe such experiences as abusive, especially if you're going to be in a position to repeat those sorts of experiences on a new generation. Break the cycle of stupid.


> They should have known better

In most of these cases, I actually suspect malice over ignorance. They figure they can squeeze some unpaid overtime out of an unsuspecting junior by pressuring them to "commit" to an unreasonable deadline and then trying to hold them to it.


I had a shocking realization one day that the incompetent PO I was dealing with had not bumbled his way into that position but had been placed there on purpose by his manager.

There are certain brands of incompetence that get results, and you don't have to be the one that is aware of that fact to reap the rewards of manipulating people in that way.


> Only brand-new developers (and Sith?) estimate things in hours. Experienced devs estimate in days, but senior developers won't give you a number in increments smaller than weeks

That just reflects the size and complexity of stories they're asked to estimate.


I think it has a lot more to do with how far away from academia they are. I have a side job as an external examiner for CS students, and the ways they are taught to do estimates are some of the most useless imaginable.

Have you ever actually seen something like a planning-poker session among only juniors? I have, and while this is very anecdotal, it has always played out like this: the most confident programmer in the group grossly underestimates the task. Often this person has a valid reason to be confident, but because of this, their opinion also weighs much more heavily in the group, so that even if someone throws the 100 card, the task will still end up with a 2.

The longer you work in the real world, the more you realise that estimates aren't a showcase of you. They are the timelines project managers need to implement projects, and if you fail to give a realistic estimate, then you fuck with that timeline, which is often the worst thing you can possibly do as a software developer.

Even if you "know" how to implement a really simple task, you simply need to make sure there is room for a day's worth of searching for something really stupid. And you're never going to be asked to estimate something really simple: not as a junior, not as an experienced developer, and not as a senior.

I do understand why students are taught to estimate wrongly, though. Their projects aren't very long. But I personally think we are doing them a huge disservice by making them estimate in hours. Why wouldn't they expect that to be the norm, if that's what they've learned to do?


> That just reflects the size and complexity of stories they're asked to estimate.

In any reasonably complex application, even the smallest task will likely not reach anyone in under a week anyway. Need to change a button? Sure, that's an hour or two to change it and ensure it didn't break some other scenario. Maybe another day or so to verify it with team X or dev Y in a PR. Then you wait for tests and for the release to the QA environment. Maybe there's another round trip if something is found, at which point you might be working on another button. Then it sits for a while until it can catch a production push.

I'm also assuming that the senior dev is working in a fairly large machine. If it's a startup then sure, anything goes depending on what shortcuts are in place.


This doesn't always work. I've had managers tell me that I was supposed to give a "real" estimate and we'd figure out the buffer together. Obviously it led to me working late hours to meet an unreasonable deadline.


This sounds more like a problem of scope creep than unexpected complexity.


Scope creep happens when you discover new requirements after defining a project scope and delivering an estimate. The type of analysis outlined in the article is helpful to define a sane and reasonable scope upfront, to hopefully avoid creep down the line.


> Scope creep happens when you discover new requirements after defining a project scope and delivering an estimate.

The author defined the project scope. The students discussed it and came up with a 4-hour estimate. Then, after that estimate was made, the author added a lot of requirements and said the students' estimate was too low.


It's in the article:

> The goal isn't to fall into a feature rabbit hole, but rather to understand your assumptions about the project. If I don't do this exercise and I try to build it, then I will often run into two or three crux issues that I could have easily known about from the start. Don't go implement every feature idea you come up with from this exercise!


This is only true if you believe software development to be the practice of slavishly implementing ill-defined requirements regardless of what the user actually wants.


I prefer to think of it as pushing out minimum viable products as soon as possible and implementing finer details afterwards. I think that's a principle of being agile.


Encryption? A GUI for pausing rather than just stopping the process when you don't want it? Server synchronisation?

I can see why this project will take longer than 2 hours if you introduce new requirements.

I have a bash alias that almost solves the problem as stated. I'm not on my PC at the moment, but it's something like the following (forgive the formatting; on mobile, pressing space inserts keyboard suggestions, so even this much indentation involved a bunch of copy-pasting).

    while :; do
        x="$(xclip -out)"                             # current clipboard contents
        if [ "$x" != "$lastx" ]; then echo "$x"; fi   # print only when it changes
        lastx="$x"
        sleep 0.5                                     # poll twice per second
    done
If you pipe that function (I call it whilepaste) into grep -E 'https?://', you've solved the problem as stated. If you want more, i.e. it's not just for yourself (the author literally said they just wanted the program for their own use) and it needs further usability features, automatic run on startup, etc., that's a different set of requirements that apparently weren't mentioned when the student posed the clarifying questions. But even so, making a tray icon and automatic startup on Windows is probably about two hours if you're experienced in coding and just need to look those two things up. I remember doing both as a teenager back in the Game Maker 7 days using a DLL, so I have a vague memory of how much work it was.
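Concretely, the full pipeline might look like this (assuming the loop above is wrapped in a shell function named whilepaste, and GNU grep for --line-buffered):

    # --line-buffered stops grep from buffering matches while the stream stays open
    whilepaste | grep -E --line-buffered 'https?://' >> urls.txt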

I don't disagree with the general point that some problems are deceptively hard, and that if you're making this for someone else (even this tech-savvy coding teacher), you'll probably spend an afternoon on it with polishing and extended testing included, and that it gets longer the bigger the initial estimate (the famous one-weekend project). Still, a problem like replicating Twitter is probably a better example of something that seems very simple, where the amount of time needed only becomes clear once you realize how much work it takes to set up all the stuff around the core function of showing short messages to followers.


I have been working on a side project that allows saving and sharing bookmarks with a simple like/dislike/comment system. It looks very simple and is essentially a CRUD app. It's a rewrite of a similar project I had.

That has turned out to take over a hundred hours of my time, and still counting. I started in June, expecting to finish by August. It's November now and I'm still only at 80%. There are just too many details that can't be predicted before you start working on the specific part.

This happens at work too: "can you finish that feature now and release it in the next 15 minutes or so?" Sorry, but I've only been a maintainer of this massive codebase for a few months and haven't even touched that part of the program. And yeah, I have other pressing work that I'm focusing on, so which one should wait?


Great example. This is something I struggle with myself, but it especially annoys me when my non-engineer friends complain about bugs or missing features when we game together.

“Why can’t I just do {x}, it would be easy to implement, any programmer could do it, why are {y company}’s developers so incompetent”.

Now I have a link to send them rather than awkwardly coming up with an example on the spot of why {x} is probably way more complicated than it seems, assuming you want it to actually work in any meaningful way.


I've found that asking non-engineers to explain to me how to make a cup of tea can be helpful in explaining why software development is so complex.

They'll usually say something like "boil the kettle, pour the water in the cup and wait a few minutes"

So then you go in and probe all the requirements and edge cases:

- what if there's no water in the kettle?

- what if there's no power to the kettle?

- what if we're out of tea bags?

- how do I know when the kettle is boiled?

- can I put the tea bag in the cup while the kettle boils, or do I need to wait for the kettle to be ready first?

- what size cup do I need?

- how do I remove the tea bag?

- what if there are no clean spoons?


I was handed a chart by our shipping manager. It had columns for each state and shipping prices based on quantity. This was a simple project: turn it into a table, then look up the needed value. When the manager needed to update the prices, they could just update the table.

I noticed the 3rd quantity column was "3 or more". I should have run at that point. The next page of the spreadsheet added exceptions for certain item #'s and certain zip codes (each borough of NYC, for example). It was still manageable, but it was way past the two-hour point.

Sadly, when this was originally put on the AS/400, it was 4,000 lines of IF statements, since the original developer didn't put it in a database. It probably started out as a hack that said "for a quantity of 2, let's offer a discount on shipping", and then later someone said "if we're shipping to states on the west coast, use this price"... etc.
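For illustration, the table-driven version is only a few lines. A sketch with a made-up shipping.csv layout (state, then prices for quantities 1, 2, and "3 or more"):

    # shipping.csv lines look like: NY,5.00,8.00,10.00
    shipping_price() {
        state="$1"; qty="$2"
        col=$(( qty >= 3 ? 4 : qty + 1 ))   # clamp "3 or more" to the last column
        awk -F, -v s="$state" -v c="$col" '$1 == s { print $c }' shipping.csv
    }

The exceptions (certain item numbers, certain zip codes) would then become extra columns or a second table rather than more IF statements.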


The poisonous version of this is "This should take no time, I did a POC myself in a day."

It happened when I helped hire a friend for a trading project. This friend is simply one of the best coders out there; he understands the domain and all the coding. The guy who was hiring him threw together a websocket, pulled some data, and thought there was just a tiny bit of work left in the details. He did it in Node.js.

Of course, figuring out the details was the hard part, made especially hard by not having a clear statement of what the project was supposed to do. Was perf important? What about all the various features?

It's really something to look out for as a manager. Don't be that guy who says everything is easy.


Interestingly, humans tend to underestimate complexity even when it's right under their noses. In the music world you see it everywhere with your average band attempting cover versions of popular songs. It's the reason the results often sound nothing like the originals: melodies vastly simplified, textural sounds omitted, harmonizing melodies absent, rhythm parts dumbed down, timbres approximated. "Yeah, it's a simple song with 4 chords, we can rehearse it in a day or two."

When your ear develops, you can hear that the genuinely simple song is the outlier.


I've found that nearly 100% of companies that say their take-home technicals take "only 2 hours" actually require much longer. For anything to take only 2 hours, you need prior knowledge and no roadblocks. Debugging something simple can be a 30-minute detour.


Anyone else feel like lately at work they're spending more time dialing in time estimates than actually doing the work? It feels like we have a problem (at least in sysadmin/devops roles) of valuing time estimates/"the process"/agile over actually banging out some work. I'm starting to tune out planning meetings and requests for time estimates with a least-effort-possible attitude.


I've had this problem before, it's a symptom of management not trusting development teams.

It's worse when combined with the "we say estimates aren't timeframes, but secretly they are" problem: give a 3-day estimate for something, and when it's not finished in 3 calendar days (because only around 1/4 of that time was actually usable for work; the rest was pointless meetings or 20-minute gaps where nothing useful can be done), more meetings get scheduled to discuss why the estimates are "wrong".


For some bizarre reason, management has not yet assigned their programmer underlings the task of automating themselves out of existence. I can't imagine why.

I don't think it's a matter of trust. Management sees its job as setting priorities and allocating resources, so they shoehorn that into everything they touch. They require estimates so they can divide impact by effort and assign the highest ratios first, without regard to necessity, dependencies, or technical debt.


The same effect shows up in technical interviews. As someone who has given hundreds of them: when you design your coding task, give the candidate 3x the time it takes you (or someone at a similar level to the candidate) to complete it. The problem should also be based on a real thing the team has done, distilled down to remove as much work context as possible.

So a 2-hour assignment should generally take the interviewer about 40 minutes max. And an interviewer should never hand out a question or task they themselves have not solved.

This rule also applies to in-person interviews, where candidates usually have 45m-1h and are nervous and on the spot. There the question should take the interviewer about 15 minutes, and don't forget to leave time to talk through their solution and to take their questions at the end.

And one last piece of interviewer advice for now: look for reasons to hire the person. Identify what this person would add to the team and the company. It is easy to find flaws in everyone.

Anyway, tangential to the article but relevant enough to share I think.


Great point, and a good way of illustrating it.

But I have to respond: or you could grab something similar and use that as a starting point. Say, for this application, `clipman`, which gives me a popup list of the last few clipboard entries. Dig into that and add a regexp filter and log output, and it need not even interfere with the original functionality.


I think both the article and you are missing a very big aspect of development: ongoing support. This is especially problematic when using existing code to create a new product for a specific use case/client. It may not take very long, but what everyone seems to forget is that this new product has to be maintained. Every time a new feature is added to the base code, it has to be tested with that new product as well. Repeat that 10-15 times and it becomes difficult, if not impossible, to add a new feature to the base code without either breaking some downstream code or getting bogged down in updating numerous programs, then testing, then fixing the issues, then testing again.


"The new product has to be maintained" ... If i'm the only one using it, i kindof assume that burden implicitly.

One of the great ideas in open source is that others can use my software without me having a "maintenance burden" for their use. They can maintain their own copy if they like, depend on me, or decide shit don't work.

If I did my proposed hack, I doubt I'd bother the upstream developers with it; if dozens of users liked it, maybe later it could be made worthy of further public consideration. But "is the maintenance burden too high?" as a consideration before you begin writing is a great excuse to never try.


Why can't simple programs like these just end up finished, forever?


I think that this is a valuable lesson for CS students. I used to underestimate time for the same reasons these students did. The core technology (in this case, listening for the clipboard events) fits in the estimate. Making a product that can actually be useful will take much longer.

Estimating effort is important. Work is an investment, so the decision to invest changes with the amount of work.

I think this lesson can only be learned the hard way: create a new application and try to get users. You’ll learn where you spend time and get better at estimating. You’ll also appreciate why “simple” features appear slowly in software you use.


The overall gist matches my novice programming experience: you underestimate all the features and edge cases you'll need to cover, or at least I do. Butttt I don't like the example. The base URL recording is a two-hour task for a given OS. If you want regex, use grep. If you want timestamps, use local epoch time and get on with your life. Format it as CSV. Forgo the icon; just put it in a daemon/service/whatever you can stop/start. If you want context in the CSV, add a curl GET and grep out the title. Boom, you now have a product a CS professor can use to save URLs.
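A minimal sketch of that recipe for one OS, assuming X11 with xclip, curl, and GNU grep installed (the title scrape is best-effort):

    #!/bin/bash
    # Poll the clipboard; log "epoch,url,title" lines as CSV.
    while :; do
        clip="$(xclip -o 2>/dev/null)"
        if [ "$clip" != "$last" ] && printf '%s' "$clip" | grep -qE '^https?://'; then
            title="$(curl -sL "$clip" | grep -oP '(?<=<title>)[^<]*' | head -1)"
            printf '%s,%s,"%s"\n' "$(date +%s)" "$clip" "$title" >> ~/urls.csv
        fi
        last="$clip"
        sleep 1
    done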


A while back I applied to a PHP/Laravel job. I had to make a CRUD app, and I did it; took me like 9 hours. But I was a noob back then regarding repos and how to share code, so I attached it to an email, and it never made it. Days later I asked "did you get my project?" and realized it didn't go through. Oh man.


See if you can reuse anything from here:

https://github.com/agilefreaks/winomni

This is the Windows client for a cross-device clipboard we worked on a while back but dropped because we couldn't get users. Other components: https://github.com/agilefreaks/webomni and https://github.com/agilefreaks/droidomni


This just highlights the difference between a student proof-of-concept project and a product... and thus, you need to know what kind of software you're developing.

Monitoring the clipboard periodically, determining whether there's a URL, and recording it somewhere if there is: that's easy. If it's a student project to show that something can be done, then you are DONE.

If it's a tool that you hope others will actually use, then yes, you need more.


Regarding the task: modern OSes offer clipboard history, so IDK if this would be of any use. Is it a random example of an app, or something the author thinks they really need?


It's always two rules...

- 80/20. Yeah, much of it goes quickly. But the devil is in the details, and those add up, especially when you're staring into the abyss of an unknown.

- I learned this early on in my career, and it applies to just about all projects, not only software:

"It'll take twice as long (and cost twice as much) as your original back of the envelope estimate."

I wish I had $20 for every time I saw this come true. With that kind of money, Bezos and Zuck would be working for me :)


I often say that I can write a fairly acceptable, full-featured app in a couple of days; but if I want it to be a shipping app, expect it to take a couple of months.

I believe that schedule crunches aren't just management's fault. They are often exacerbated by engineers not "playing the tape through" and giving unrealistic estimates, or agreeing to unrealistic estimates.


When someone says "this should be easy", I like to show them the flowchart for how Slack notifications work: https://imgur.com/gallery/0p5bV

Just "notify the user"... imagine the amount of code and the number of APIs this needs to touch.

Software is way complex, and the devil is in the details.


The hardest pill to swallow is marginal difficulty. Everyone can agree that writing something like a modern full-fledged browser is hard. But how hard is it to add a small feature to said browser? Well, the feature in isolation is easy, but the entire browser at that scale is much more complicated than the sum of its parts, so the difficulty comes from incrementing the complexity of the whole browser, not from how hard the feature itself is. Good code design can alleviate this a bit, but not entirely.


That crazy-looking chart (except maybe the push timing loop at the bottom) can be implemented with 100 lines of branching if statements. The 20 or so questions it asks can probably be answered by querying ~3 simple APIs (user preferences, channel preferences, thread preferences). The only complicated thing is the push timing loop.
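In shell pseudocode, the shape of it is early-return branching over those lookups (user_pref, channel_pref, and thread_pref are hypothetical helpers standing in for the three preference APIs):

    # Returns 0 (notify) or 1 (stay silent); each test mirrors one
    # diamond in the flowchart.
    should_notify() {
        user="$1"; channel="$2"; thread="$3"
        [ "$(user_pref "$user" dnd)" = "on" ] && return 1
        [ "$(channel_pref "$channel" muted)" = "yes" ] && return 1
        [ "$(thread_pref "$thread" subscribed)" = "no" ] && return 1
        return 0
    }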


> The only complicated thing is the push timing loop.

... and producing the chart in the first place.


If someone actually builds this, they should just create a directory in the browser bookmarks and let the user use their standard browser (FF/Chrome) to actually browse the data. Maybe isolate it in a YYYY-MM directory structure to make it easier to browse.

The important thing is that all of these automatically get synced and become searchable easily.

Good idea.


But OP didn't tell us whether the students worked on the project, how they found it, or how long it took them!


Only half on-topic: Many (non-Windows) desktop environments already provide a function to keep and search previous clipboard content: See for example https://userbase.kde.org/Klipper


I did that a fair bit when I was starting out and I've learned to be conservative. The problem is when I run into somebody else who insists that (I) should be able to do something in two hours because it's simple in their mind.



Yes, to the question at the end. Possibly shorter. Get the first iteration in with quick answers to questions like "should it be a CSV or X", and once it works, tweak it afterwards instead of being paralysed by overthinking up front.


That looks like a really nice project idea. Does anyone have other ideas they like giving to students/junior SWEs?


This could be a PowerShell script that would only take 10 minutes.


Reminds me of the swing project management cartoon.


My gut reaction is that the student was right. The spec was "log all URLs from the clipboard into a file". After the fact, the teacher in question came up with more specifications, which is fine, but when the scope of the project changes, we need to reconsider the timetables and the work required.

This is normal software development. The PO/customer wants something. You give your estimate. They accept it, you start working, and then in the middle of the sprint, or in the demo, or during testing, they come up with new requirements. These were not in the scope of the original project and thus need to be re-evaluated with new estimates and possibly a new project.


Sounds like every grooming meeting to me.


I feel like the Dunning-Kruger effect is not mentioned enough on HN in these contexts https://en.m.wikipedia.org/wiki/Dunning–Kruger_effect


It says that confidence correlates with skill, just not perfectly. How is that relevant?


One man's poor estimation is the other's scope creep.


Surprised I haven't found https://xkcd.com/1425/ in this thread.


I think the student's first instinct was about right. For a program idea that sounds like a user whim ("I think I'd like to have a log file of URLs I copy-pasted"), it seems like it would be good to get a prototype into the professor's hands, to see whether he actually finds it useful or whether it just winds up recording a bunch of URLs he never looks at, before investing more than a couple of hours in making it super user-friendly.

Instead, he just explained to the students why it's much more complicated than that, probably discouraged them from trying to build anything at all, and as a result is no closer to having any way to capture the URLs he copy-pastes during the day. No value was created in this exchange.

My overall reaction is that this feels very much like an academic approach, untethered from the practices that actually work for creating products in industry, and one that inculcates bad habits: it will turn students into the kind of developer who, when asked to solve a problem, comes up with a long list of reasons why it's too much effort to be worthwhile, rather than a list of ways to figure out how to solve it.

In particular, this pattern of listing "but have you thought about..." questions is amateurish product-owner BS, and it does students a disservice to suggest it qualifies as any kind of requirements-analysis model, let alone a first step in deciding how to go about building a piece of software. It is a great way to stop yourself from even starting.

Requirements gathering doesn't consist of guessing what things a user might want, or even asking a user what they think they want. Yet the professor goes off positing a need for a pause mode, timestamps recorded next to URLs, log encryption... all of which are maybe interesting, but none of which seem to make or break the viability of the product.

On the other hand, if it turns out you can't actually make a program with sufficient permissions to read arbitrary clipboard contents without triggering a malware detector or getting blocked from installation on a university computer, that would rather screw up the entire project. So maybe focus on verifying that you can at least do that, before you start wondering about what color the system tray icon is going to be.

And maybe your research will turn up some interesting affordances or edge cases in the clipboard API that you hadn’t thought of that take the project in a completely different direction?

The fact that one of the students said something important that identified a key risk to the entire project concept ("I've never interacted with the clipboard API before"), and that this got glossed over in favor of discussing all the shiny affordances that could be bolted onto the solution, was a real miss of a teachable moment.


Sounds like Windows software.


A daemon that writes the contents of the clipboard to a log file would be a reasonable utility. It does not need to be pausable; just kill the process when you don't need it. It doesn't need to record only URLs; just use grep to filter the log file. This is how I would write the thing for my own use.

The professor gave the students incomplete information. What they said is perfectly reasonable. This is a project they will be doing on their own time, so the scope of it is entirely up to them.
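Under that framing, the "features" are just shell commands; for instance (cliplog and clipboard.log are made-up names here):

    # the "URL filter" feature
    grep -E 'https?://' clipboard.log

    # the "pause" feature
    pkill -x cliplog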


Yes, because I already have Terraform that will spin up a secure prod k8s cluster in any of the big 3 cloud providers.

All that’s left is compiling a deps list of react components.

English is far more verbose than machine languages. It can easily make a problem seem like there's a lot more to consider than there actually is, if you've already got a corpus of syntax to leverage.



