Sadly just an outline but I didn't mind that. Good read.
I'd add a few things I've noticed over the years. Great developers like to pair program on tricky stuff, because they always learn something. They will try two different implementations if they're not sure which is best, and then the right one will be obvious. They back their arguments with real-world proofs. They try most fringe/new technology, even if it's not right for the current project. They hold a large amount of domain knowledge in their heads. They admit when something is twisting their brains, draw it on paper, talk about it, and then bang it out. They fantasize about what-if, in-a-perfect-world scenarios. And they love to share knowledge.
I have actually found the opposite. The best programmers I have known are often the most reluctant to try new technologies. At least until the technology appears to have reached some sort of critical mass and it has been shown to be virtually guaranteed to increase their already very high levels of productivity. Partly this is just an experience thing. After you see enough shiny new things wrapped up in spin and hype the task of peeling that stuff away to get an honest assessment becomes unappealing.
I'm definitely in this camp. I read enough programming related news that I know what new technologies are emerging and what problems they attempt to solve. But these days I wait until things have gained enough momentum that I think they'll stick around for at least a few years before I'll even consider adopting them for real work.
That said it's good to keep learning new things just for the sake of learning so I will still sometimes pick up something new just because it looks different enough from things I'm already familiar with.
You should hesitate to introduce fringe technologies into production systems while they remain fringe. But you should absolutely evaluate them, figure out which ones you want to keep an eye on, and learn ideas from them that you could apply elsewhere.
I do tend to try things when they solve a problem that I have. I may be missing out on problems that I don't know I have. On the other hand, I frequently see people using tools to solve problems that they don't really have.
From what I've seen, they are eager to try new technologies but not necessarily new implementations. Just enough to get an idea of where this tool fits in their toolbox should they ever need it.
I am in the same camp. I'd prefer to know an older technology (and its underpinnings) completely than scratch the surface of a bunch of short-lived new ones. Most of the new stuff I see is pretty high level. I invest my free time in getting as close to the metal as I can.
If there's something new with a feature I like, I'm a lot more likely to try to replicate that feature in something I already know than to make a switch.
Even your statement isn't new! 2200 years ago King Solomon wrote, "There is nothing new under the sun." We just keep finding different ways to rehash the same core concepts.
I would have thought that in an area like IT, where everything gets reinvented every decade (or less), a master programmer wouldn't have to look at the details to assess a "new" technology.
I hate 'trying everything' because I never feel good at anything, and more often than not "new technologies" in this scene are just someone else's old ideas, rehashed and made more complicated. Mastery requires focus.
I'm no master, but I am middle aged. I like to learn at least one new technology a year; I always learn something from working hands on that I might not have understood just from ideating about it.
Something that has been helping me a lot recently is trying to know all there is to know about the tools/concepts I am using and the problem I am solving. Too often have I used tools I half understood to solve problems I didn't define clearly enough.
Yeah, this is a bad pattern I started noticing even way back when I mucked about with WordPress theming. Hell, I could go further back and notice this pattern in assembling LEGO. And sadly I still catch myself doing it even now whenever I'm caught up in that "this shit doesn't work, let me try that real quick" loop, where 'that' is only one of many variables I don't really understand.
What I've noticed works best for me is to nip it in the bud rather than bail out of that loop. Because once I'm stuck in the loop, I find it hard to sit back and think about things properly.
The worst cases for me are when I try more than one 'new thing' at once, which usually happens in new projects. Most recently I set up a new project and decided to try out TypeScript, a new back-end tool, and a new build process all at once. I couldn't get it working, and only once I dived into it did I discover there was one 'little thing' (JavaScript's current module kerfuffle) that caused most of the issues.
What I hate most about these episodes is that at the end, all I've learned is a tiny bit more about disparate systems that I still don't master. A huge waste of time.
Well, usually the first time I'd have some new package of Lego, I'd want to construct the exact thing I bought from the instruction booklet.
Sometimes I'd be working on different parts at the same time and use a piece that was similar to another. I recall a few times looking everywhere for a missing piece, or disassembling a part to find it, only to discover that the missing piece was in another little part I'd already assembled.
I couldn't upvote this enough. It's even a way of life that is larger than programming. Most of my math issues came from not really seeing the extent of a concept or the problem. Once that's done, things suddenly become far less resistant. Patience, depth, focus, lucidity.
There's a balance though, because I learn a lot by DOING; I can only read/learn/think about something so much before it just gets fuzzy and I can't keep any more in my head.
I agree. I actually think it's the single most important skill for beginner-to-intermediate programmers. Not structuring the code or managing complex abstractions, just having a reliable mental model of the program and the environment.
I've often seen beginners struggle because they get so used to the compiler or the (often manual) tests catching errors that they just try things "to see if they work" without understanding what they're trying. But they don't consider two important questions: "what do you want to happen?" and "what do you expect to happen?".
If you can answer both of those, you can figure out whether your code will work before you write it. In many cases when programmers struggle it's because they don't know the answer to one or both. They're left just trying stuff until something works, but there's a lot more things that don't work than things that do, and a lot more bad solutions than good ones.
Sounds like a good idea and seems to be worth a shot. Actually, I am thinking about why I put so little effort into this, currently. I will take your suggestion and see how it goes :)
A similar one when dealing with compiled languages with good type systems (Haskell, OCaml, Rust, Mercury) is to periodically build even when your code is broken and try to guess what error the compiler will give. This helped me a lot with those languages.
My process in these languages (Haskell I know pretty well, now in the process of learning Rust) is to sprinkle in `undefined`s and `unimplemented!()`s and whatnot so I can continuously use the compiler to check my work without actually running the code.
It's a good way to test assumptions and refine a mental model, and it's also just plain useful for catching boneheaded mistakes as you go. I'm not sure if I'd say I use it as a crutch, but I certainly miss it when I have to use languages like ruby and Python.
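To make that concrete, here's a minimal Haskell sketch of the stubbing approach (a toy module with made-up names, not code from any real project):

    -- Toy report module; all names are invented for illustration.
    module Report where

    import Data.List (sortOn)

    data Entry = Entry { entryName :: String, entryScore :: Int }

    -- The whole pipeline typechecks end to end even though one of
    -- the pieces is still just a hole.
    report :: [Entry] -> String
    report = render . rank

    -- Implemented: sort entries by descending score.
    rank :: [Entry] -> [Entry]
    rank = sortOn (negate . entryScore)

    -- Not written yet; the compiler still checks every caller against
    -- this signature, and running it would blow up at exactly this point.
    render :: [Entry] -> String
    render = undefined

A plain build is enough to confirm the types line up while `render` is still a hole, and GHC's typed holes (writing `_` instead of `undefined`) will even tell you what type is expected there.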
I may be mistaken, but I actually remember Kent Beck saying it in his original TDD book. That was the thing that made me understand how refactoring can take the place of big design up front. The best part is that you only work on things that you need as opposed to wandering around changing every piece of code that you don't like ;-)
I didn't mean the entire Kent Beck bibliography, this is obviously a different thing than his books or he would've referred to his books. I just would like to see this particular outline expanded slightly is all.
About this item, it got me wondering: should we never delegate difficult work to a specialist if it gives us pleasure? Do we always need to learn it first?
I don't agree with this; maybe I need profit more than fun. I can find fun in a lot of other things that I can't delegate at the moment.
My top piece of advice:
Programs behave predictably; when something impossible is happening, it's because one of your assumptions is wrong. When that happens, you'll find the bug the moment you start testing your full set of assumptions.
For some reason, even though this is invariably true, my friends at school didn't appreciate "I can't understand why I'm seeing this weird behaviour", "One of your assumptions is wrong!" xD
>inaccurate documentation.
With the exception of this one, I usually assume that you need to verify the behaviour of a library/feature you've not used before.
These are attributes that everyone can, on honest personal judgement, mistake themselves as possessing to varying degrees. But it might be a useful list to read through when stuck on a problem that is simply not giving way.
Unless we have the chance to learn from and be coached directly by a master, what would be helpful is narratives on how they think and solve problems.
The best I have come across so far in a book is "Coders At Work". https://github.com/aosabook/500lines promises to be another. Rich Hickey did a great service by talking pragmatically about the meta aspects of programming in "Hammock Driven Development" and "Simple Made Easy". Dijkstra's and Alan Perlis's writing, which has been gaining a resurgence in popularity, is of a similar ilk. http://damienkatz.net/2005/01/formula-engine-rewrite.html is also an intriguing story.
The article makes a number of good points. The first three points in the "Learning" section resonated very well with me.
Then there's stuff I just don't understand. For example:
> Multiple scales. Move between scales freely. Maybe this is a design problem, not a testing problem. Maybe it is a people problem, not a technology problem [cheating, this is always true].
Great programmers often take a step back and ask themselves whether the problem they are encountering is only a symptom of a far larger problem, e.g. "maybe this entire component needs to be refactored instead of just tweaking these tests" or "maybe when the customer asks for X, what he actually wants is Y".
He might be suggesting that you can consciously consider the problem at different levels of detail. You can step back and look at the problem in its wider context ("the big picture") and you can also zoom in and focus on aspects of the problem in more detail.
After reading that, I don't feel a bit smarter than before. That's usually how it goes when you make a bold, universal statement about something and put it into 10 lines of text.
I acknowledge what Kent Beck has done and what Facebook is doing, but this doesn't deserve to be on the HN front page.
True, but then long protracted explanations don't necessarily do a better job, they are prone to missing the forest for the trees. The beauty of the short bullet point is that it makes the practitioner think, sort of like a zen koan. It will be utterly useless to someone who hasn't already put in the practice to be on the brink of enlightenment already, but then again, what other way is there to truly learn?
(゚ヮ゚) Does anyone else know of other paths/checklists from beginner programmer to expert/senior programmer in different domains (front-end, back-end, dev-ops/sysadmin, Android, iOS, systems programming, gaming, 3D, image, video)?
I also think it important to understand the nature of the environment your problem is in, because it so much flavors the approach and the solution.
In computer and data science, the emphasis is on algorithms, data structures and ADTs. But in business, commerce and industry, it's the representation of complex domain concepts, real and abstract, and their interactions that are key.
In some ways there is a fundamental divide between the two. While the OP's advice is valuable, for me an understanding of where and how to apply techniques across that divide is one of the biggest impediments to "mastering programming".
"Good design is about taking things apart."
--Rich Hickey
I think that statement captures several of these. He says it in the context of methodology and "architectural agility" in a great talk called "Simplicity Matters." [0]
This article should be called 'Mastering large-scale team programming'. In reality there is no single correct approach to programming. All programmers/engineers/developers have different specializations.
Some developers are really good at getting an MVP out the door quickly but their code may not quite work at scale. Others are good at working in large teams on large projects; others work better alone or in small teams. These different types will produce different types of code - the utility value of various programming habits changes based on team size, project size and urgency requirements.
There could be some 'Master MVP programmers' and 'Master team-player programmers', 'Master large-project programmers'... You can rarely put them all under a single label - As developers we tend to get stuck with particular styles depending on which kinds of companies we have worked for.
It is not quite correct to assume that because a company is financially successful and handles many millions of users, that its methodologies are the only correct way to do things.
Programming is an adaptive skill and should change based on economic/scale requirements.
I wouldn't say that this article is describing a correct way to do things. It's more of a set of guidelines to follow, and I'm glad someone wrote them down.
I've been doing these things, but I generally have a hard time concisely describing it. Of course, these aren't hard and fast rules. More of a guide to thinking about problems to help programmers be more efficient and accurate... More precise...
I don't really know how to describe it, but one day, you find yourself doing these things more... Stuff gets easier... Things just start to click.
I'd like to add one more, though maybe it's really just another way of putting one of the other statements.
Stop worrying about all the unknowns in the project. Work on what's known, usually the unknowns will become more clear as you progress.
Yeah, many are really important, like the part about 'calling your shot' - I constantly see so many people testing solutions to problems at random, either copy-pasted from the internet or built using random auto-completion; they spend days and days circling around without ever understanding the changes they're making or the goal they're building towards.
It's doubly painful when, days later, they come up with a random set of lines that produces the intended result plus a dozen unintended side effects and proudly declare 'see! I did it'.
The fact is, this stuff has been in the literature since the seventies. I found very little in this article that couldn't be found in 'Code Complete' or 'Refactoring'. I just wish that at some point developers would stop learning everything from scratch.
> Stop worrying about all the unknowns in the project. Work on what's known, usually the unknowns will become more clear as you progress.
Good advice which applies not only to software development. I've found that focusing on what's currently known and on what I can change removes anxiety, allows me to move forward and then usually the unknowns resolve themselves along the way.
It works very well, until it doesn't. Then you have a pretty interface, an elegant test suite, and a big black box full of entropy and ignorance labeled "then a miracle occurs".
Even when that does happen... and I'd argue that it often does not, if you've done the work to make your interface pretty, and write a precise, accurate test suite, you've usually done the work to make the actual functions neat and orderly...
Even when you end up with a black box full of entropy, it's segregated from the rest of the system. You can feel free to change the rest of the system around it and know that the black box will keep doing its job, as long as you keep using the well-written API within spec.
But, that's not all you get... because you took the time to write the tests, you can refactor this entropy box to your heart's content... until it starts looking more approachable. You've got your tests around the API, right? Then you don't even really need to change them to do the refactor work. The only reason to change the tests would be if you want the function to do something else (or you missed something, of course).
Take small steps... refactor out a couple lines at a time... run the tests with each iteration. Based on previous experience, you're going to end up with a fairly clear and concise implementation... not to mention performant.
Absolutely! Get something working and then keep it working while making small steps.
On the other hand, the example that comes to mind is Ron Jeffries' TDD sudoku solver.
I've seen several systems where the magic black box is doing things hilariously wrong---as long as it works on the test cases and the production results are sufficiently difficult to verify, it'll be accepted as gospel.
I agree. They all seem quite familiar, though more abstractly defined than I would define them. Perhaps that is why I am now a bit tired of development - too many new specific details to learn about the 'latest and greatest', but fundamentally it is all the same stuff these days. Maybe I'll pick up Haskell, just to think in Lisp-ish terms again... But statistics and ML in general are now drawing me in - and I hated stats in college. Brand new concepts.
Like LoSboccacc said, all this stuff is in 'Code Complete' and the related series of books. Others I'd recommend are 'The Practical Guide to Structured Systems Design' and 'The Psychology of Computer Programming' (about egoless programming). Old books, but well written and perhaps seemingly basic considering how complex React/Node/JavaScript... seem to be, but the fundamentals never go out of fashion.
Absolutely disagree with you. Slicing problems small and focusing on one at a time, and all the other good points, are a must if you want to be good at any kind of programming - and even more so for skills outside programming.
Not if you have a hard deadline and your client is launching a massive advertising campaign for your product in a few hours. Also if your project is part of a promotional campaign, it will be up for a few months and then the whole thing will be torn down forever - You don't want to over-engineer it.
I only worked briefly in the digital agency space, it wasn't my thing. My workflow very much adheres to the points described in the article but I wouldn't say that they necessarily reflect all that it means to be a 'Master programmer'.
It seems highly unlikely that nothing would be reusable across campaigns. Identifying the commonalities and isolating them into subprojects that you build out in small increments would seem to be a benefit.
In fact, I've been to a presentation by a digital agency on how they did exactly that and improved the time-to-launch on new projects as a result.
Besides, you don't want to overengineer anything. Refactoring will happen, so the design can be deliberately minimal at the start if proper refactoring practices can be applied.
If, by your original comment, you meant that if your project is simple enough you can muddle through like this, whereas with a large project you can't, I agree with you. But just because bad design doesn't scale up, does not mean that good design doesn't scale down.
The bottom line is good design is fractal. You absolutely need it to achieve the largest scale, but it pays dividends at any scale. While it's true there are a lot of cases where it doesn't matter, doing so will never make you a master programmer any more than chainsawing a log to make seats around a fire pit will make you a master carpenter.
> This article should be called 'Mastering large-scale team programming'.
To be fair, that IS the hard kind of programming.
Small, focused teams of talented developers can do things pretty much as they like and are likely to have a positive outcome. (That's why I'm skeptical of typical "agile team" success stories; if you care enough about your job to identify with the methodology, you are probably a competent developer that would get results with mostly any methodology ... it's a self-fulfilling prophecy.)
Spoken brilliantly, as someone who has not seen the nuance behind the post. The Facebook note was written by Kent Beck, father of eXtreme Programming (XP) and an original Agile Manifesto signatory.
He is not saying follow this methodology because a company made money.
Invert your thinking; he is saying the majority of companies that utilise the following values and techniques have far fewer failed projects and deliver far more projects to scope, meeting customer requirements in a suitable timeframe.
To say that this man needs to consider other approaches is akin to saying Muhammad Ali should have considered other boxing styles ;-)
The man created a methodology [1]. That is wonderful but (imo) it doesn't give him the chops or credibility to tell other people how to program or solve problems.
Think about it.
Does Grady Booch (builder of yesterday's, now forgotten methodology) have some kind of special wisdom to dispense? If not why does Kent Beck? "Mastering" Programming, heh.
That said, sure no one can argue against such 'motherhood and applepie' statements at such a high level of abstraction.
Here is one from me "Think about what you do and act accordingly". you are saying "well duh?"? Exactly my reaction to this pablum.
Not sure any of these aphorisms are particularly relevant in practical work. Still, whatever makes people happy. If you find these useful, good for you.
[1] Not ignoring his work on JUnit which I used extensively when I used to work in Java. Of all the agile gurus, I respect Kent the most, because he has actually written useful code. Just playing devil's advocate a bit..
> If not why does Kent Beck? "Mastering" Programming, heh.
He is also the person who (re-)originated test-driven development. And his day job for the last few years has been mentoring Facebook's new engineers.
And you ignore that he answered your question in the first paragraph of the piece. You might agree or disagree with his explanation, but ignoring it just looks sloppy.
"From years of watching master programmers, I have observed certain common patterns in their workflows. From years of coaching skilled journeyman programmers, I have observed the absence of those patterns. I have seen what a difference introducing the patterns can make."
Is this the paragraph? I fail to see how such self-declarations of uber competence and self-labeling as "master programmer" should be accepted by others on his say-so. Sure, he originated/pushed TDD (and what happened to that project on which all these 'masters' worked?). You seem to think it is a good practice, worthy of elevating Kent to 'master'. Which is fine; I don't.
If his day job is to train FB engineers, and he enjoys it, good for him. If Facebook needs its engineers thus 'leveled up' by TDD etc., good for them. It is a free market. I have no quarrel with any of this.
However in my experience, the very best programmers (in any subfield of programming - Linus/Carmack/whoever, or even very good anonymous programmers working on simple CRUD systems) don't go around calling themselves 'master programmers', putting themselves at the top of imagined pyramids, or offering pithy aphorisms about how they can 'coach' other 'journeymen' (and so lesser-skilled, compared to 'master' programmers) into 'mastery' by following "patterns".
This is just standard agile coach/methodologist talk. If someone calls himself a 'master' programmer, he better have world class code/coding skills on a consistent basis to back it up. Methodology religion propagation doesn't cut it (imo, ymmv and that is all right).
He claims he coached 'skilled journeymen' programmers.
The 'master - journeyman - apprentice' pyramid jargon is part of the 'software craftsmanship' movement. In this structure, 'masters' train 'journeymen' who serve an 'apprenticeship', and help them break through into 'mastery'. Lots of jargon borrowed from the old guild structures.
So Kent is implicitly (imo) claiming to be a 'master'. At the least he is claiming to be better than the "talented journeymen" engineers at Facebook.
That said, I grant you he may be using the words without that implication. Not likely, but possible.
Whatever. All these agile/methodology guru types like to pass themselves off as skilled programmers without any supporting evidence and should be (imo) ignored totally when they pontificate about how others should program etc
He is claiming to be a coach. Coaches don't have to be master players to be good coaches. The rest is stuff you are making up because you have an axe to grind.
Which, fine, grind your axes. But maybe you could stick to ranting about what people have actually done and written rather than just barking about things you imagine. For somebody very concerned about the credibility of others, you aren't working very hard on your own.
Except we don't all do boxing and we aren't all Muhammad Ali. But some of us may be masters of our craft nonetheless.
These 'guidelines' would mean very little to someone who programs microcontroller chips - Not saying these guidelines aren't useful, but in some environments there are more important aspects like will my program fit in flash memory? Do I have time to refactor this code or should I just patch it quickly and work on something more urgent/important?
I have no idea what it should be called, but it is too abstract and short to get any use out of it.
I'm not so sure about splitting developers into those types either. Sure, there are different personalities and specialisations out there, but... MVP programmer doesn't sound like one. It's too focused.
But I fully agree that "Facebook" doesn't automatically mean success and good practices, just like "well known person" doesn't automatically mean being right.
While the author is known (technical coach at Facebook, creator of XP software methodology), I sort of disagree.
You can follow this guide and still be a low value programmer. This guide won't take you to mastery level.
And there is also a sense of irresponsibility around one item: "easy changes". Easy changes as in duct-tape programming? That's pretty much turning your project into a Jenga tower... you add your "easy change" that incurs technical debt, fix a problem... but lower productivity for subsequent changes. It also sets a bad example for other people to follow.
The step before the "easy change" is "make change easy". This usually includes refactoring and paying off technical debt.
Anecdote from one great programmer I know: When he fixed a single bug, it often came split into multiple commits. First, a few commits refactoring and cleaning up things. Then one tiny commit fixing the bug. Finally, one more commit removing now-obsolete stuff.
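A toy sketch of the same rhythm in code (an entirely made-up pricing example, just to illustrate "make the change easy, then make the easy change"):

    -- Before: the discount rule lives in two places, so the "hard change"
    -- is that any tweak has to be made (and tested) twice.
    invoiceTotal :: Double -> Double
    invoiceTotal amount = amount - amount * 0.10

    quoteTotal :: Double -> Double
    quoteTotal amount = amount - amount * 0.10

    -- Step 1, "make the change easy": a behaviour-preserving refactor
    -- that gives the rule a single home (this is the part that may be hard).
    discount :: Double -> Double
    discount amount = amount * 0.10

    invoiceTotal' :: Double -> Double
    invoiceTotal' amount = amount - discount amount

    quoteTotal' :: Double -> Double
    quoteTotal' amount = amount - discount amount

    -- Step 2, "make the easy change": the actual requirement (say, capping
    -- the discount) is now a one-line edit in one place.
    discountCapped :: Double -> Double
    discountCapped amount = min 50 (amount * 0.10)

Each of those steps maps naturally onto its own commit, the way the anecdote above describes.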
Despite hating anything XP, I can strongly relate to this one:
> When faced with a hard change, first make it easy (warning, this may be hard), then make the easy change.
It aligns well with my natural process: for any problem, spend most time designing and implementing a DSL for it, and then solve it trivially in this DSL.
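Not anyone's real DSL, but a toy Haskell sketch of the shape of that process (made-up validation rules, purely to show the embedded-DSL idea):

    -- First, a tiny DSL for describing the rules of the problem...
    data Rule
      = NonEmpty          -- the value must not be blank
      | MaxLen Int        -- the value must be at most this long
      | Both Rule Rule    -- both sub-rules must hold

    -- ...then a small interpreter for it...
    check :: Rule -> String -> Bool
    check NonEmpty   s = not (null s)
    check (MaxLen n) s = length s <= n
    check (Both a b) s = check a s && check b s

    -- ...and the original problem ("validate a username") becomes a
    -- one-liner in the DSL instead of a pile of ad-hoc conditionals.
    validUsername :: String -> Bool
    validUsername = check (Both NonEmpty (MaxLen 20))

Most of the effort goes into picking the right `Rule` constructors; once those are right, the final solution almost writes itself.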
I am not blindly following it. I think it is sound practical advice given by a titan of engineering.
I have yet to read anything of yours which says why I shouldn't listen to Kent Beck.
For every well respected opinion or technology there is always a group of naysayers. A cadre of individuals who offer nothing of insight in return.
In 1906 John Philip Sousa claimed the phonograph would ruin music. Don't be that guy.
Don't be the guy that says GUIs will never take off, that touchscreen phones will not sell, that The Godfather was a bad movie, etc.
You are being contrarian for the sake of it. Kent Beck has a massive canon of work which advances the predictability and success rate of software development. You can challenge bits of it, improve it and contribute.
You are being that guy that makes sweeping generalisations which advance nothing.
What, specifically, do you think needs to be improved in the OP's article?
EDIT: Just reviewed your Twitter. It's 2000 tweets of snark and criticism of everything you come across - GoT, other coders, Agile, politics. I doubt we will get anywhere constructive on this thread but my original post remains below.
Original post >>
"Doesn't give him the chops or credibility to tell others how to program or solve problems."
...that cannot be serious. Am actually smiling at that. Also laughing at the idea that modern coders think they have nothing to learn from Grady Booch.
Plinkplonk you are absolutely someone I would never want on my team or contributing to a product I was involved in. Aggressive, combative and dismissive of the precedents that laid the foundations for modern software engineering. You need to mature (my opinion). Your post has not painted you in a flattering light.
But feel free to prove me wrong - in your eyes what DOES give someone the chops to support others with engineering advice? What do they need to have accomplished?
We've banned this account. Personal attacks, name-calling, and flamewars, all of which you've made an egregious hash of in this thread, are not allowed on Hacker News.
If someone has built significant cutting-edge software and/or has proven themselves to be a top 1 percent engineer, I'll gladly listen to them on matters of engineering. That doesn't make them an expert on, say, designing spacecraft.
The point is Booch hasn't written any cutting edge software. He just sold methodology (and books and consulting). Which makes him an expert in selling books and consulting.
I don't want agile gurus/methodology vendors telling me how to do programming. I'll gladly listen to their advice on how to build a career around a methodology. Just a personal preference.
I mean if I wanted to know how to write a 3d game engine, I'll listen to John Carmack or Tim Sweeney, not a methodology consultant. If I wanted advice on investing, I'd listen to someone who has proven chops as an investor - say Warren Buffet.
again, just my personal preference.
I'll ignore the personal attacks and your comments on my twitter account etc, which has nothing to do with my comments here and is borderline stalker behavior.
Whatever floats your boat. All good. This is the internet
Not borderline stalker; was just interested in your experience. Was hoping you would blow me away with cutting edge engineering you speak about. shrug
Agile vendors don't tell you how to do your job. They tell you how your job fits within a whole and that whole can be delivered quickly if you play nice with others.
Not surprised you struggle with the concept given your tone. Clearly not a growth mindset and fixated only on your technology.
Nice summary of Booch by the way, neatly side stepping his programming experience and Master's in electrical engineering or his work supporting design patterns. eyeroll
The hate is strong in you.
I would also add, with a little glee, Agile vendors are not going anywhere and you will be listening to them for a long time to come; you know why? Your boss listens to them. Don't be bitter at consultants, the game chose them.
" They tell you how your job fits within a whole and that whole can be delivered quickly if you play nice with others."
These guys have no special expertise in this either. They haven't (mostly) worked on high performance programming teams or led successful companies or launched killer products. But if you give them a chance they'll try to sell you 'methodologies' on all these and more.
They make their living selling ill thought out ideas to clueless middle managers. What makes them experts in "how your work fits into a whole?"
Lol getting a masters in electrical engineering and having written forgotten books on design patterns makes you an expert in how to 'master programming' and work in high performance teams? News to me!
You understand Booch did not write the master programming list referenced by the OP. (No doubt you will edit to remove that erroneous reference but shrug)
Saying Kent Beck, Ken Schwaber, Jeff Sutherland etc. have not led successful companies or worked in high-performance programming teams... he he he. That's class.
I hope this post is immortalised :-D
So if Agile is ill-thought out, what is not ill-thought out? By all means, you have the stage...
>However, you also, worryingly, faked 3 comments at the bottom of your own blog post, one pretending to be a Scrum Master saying they are taking the money. They are time stamped and clearly written by you.
WTF?
you spend your time digging up my old blog posts from years ago, accuse me of faking comments on my own blog post, (lol what? evidence? or are you just high and imagining things?) and then call my behavior abnormal?
Heh. Whatever man. This is getting creepy.
I've had an anti-agile point of view [1] for a long time now. so you went hunting and found a blog post which I wrote years ago elucidating that view. So what? This thread is getting meaningless and you are beginning to creep me out with this behavior and personal attacks, so I'll exit this too-long thread and leave the floor to you.
I simply searched for the people you referenced and the word Agile. I had no idea your blog would be ranking first page for anti-agile rants.
You find it creepy that you list your twitter profile and you publish an online blog and people then read your Twitter and online blog?
I def believe and am willing to bet a month's salary that you faked those blog comments. I am more sure of it than my own name. We both know you did. It's transparent.
Please don't post like this here. Comments on HN need to be civil and substantive. If you want to make a criticism thoughtfully, that's fine, but don't call names.