Group of people == team because all report to same manager.
Manager in charge of multiple different projects because "success in organisation" == "number of people managed" regardless of whether they are being managed well or not.
Manager needs daily updates of all things going on in his "team", because otherwise not have fucking clue about what manager is "managing", and clearly far too difficult for manager to read JIRA board, look at commit histories etc. Much more efficient for everyone to sit around for an hour during which they maybe have ten minutes of productive discussion.
That's actually not been the case in many of the teams I've worked on - but we have all been working on the same set of related products (or a subset of a product). Often the managers weren't even in the teams. I foolishly assumed that's how most people used the term "team"...
Lots of people here seem very upset at the idea that someone could write their memoirs at thirty.
While it wouldn't be something for me (either to do or to read), I can think of lots of reasons for doing it which would not have to be indicative of a disordered personality.
Maybe the author is just the kind of person who parses experience through the lens of writing. Writing down notes about what you're doing, and what you feel about it, could be a way of understanding it more richly. And there ARE advantages to records made in the moment. Yes, the author could wait until they're fifty and have achieved 'wisdom' or 'success', whatever the hell either of those things mean, before they look back on their life. But the record they'd come up with would be edited and distorted in so many ways that it would be useless as a description of how this person felt in their twenties.
Sure there's a chance that such a record will be self-indulgent, stupid, vacuous etc. But there's also a chance that writing of this kind will be one of the things which helps the author to grow. And even if it doesn't, so what?
I've been working in IT for 30-odd years, 24 of them entirely office-based, 6 some combination of hybrid and home, and in my view the argument that innovation and creativity are uniquely fostered by interactions in the office is seriously overstated.
This is obviously just anecdata, but in my working life I can think of literally zero ideas which emerged from just happening to run into someone in the canteen or one of the corridors. Most work conversations are vacuous, time-wasting bullshit. Fun if you've nothing better to do, and nice for greasing the social wheels, but rarely a source of brilliant transformation.
In a depressingly large number of workplaces, creativity and innovation are rarely relevant to your job description anyway. You're not there to transform the business. You're there to churn through whatever piece of nonsense has been handed down from on high, and since in quite a significant number of cases the work you've been given will make zero practical difference to the overall well-being of the company, it doesn't really matter if you do it well or badly.
Even if that weren't the case, what is it that is supposed to make serendipitous encounters inherently better than structured idea development? Are we really supposed to believe that just happening to encounter the person who holds the missing puzzle piece of what you're working on, at exactly the moment you need it, is the best way of generating transformative ideas? I'm not saying that such random encounters don't happen and can't be significant. But one reason these things turn into corporate legends is because they are rare and outlandish. "I said something in a taxi or a bar and someone else said something that turned into $$$" is a story, whereas "I was stuck, so I laid out the problem in a Slack channel or took it to an online brainstorming session and a couple of my co-workers helped me fix it" isn't. But that's because the first case is a weird freaky miracle, and the second is just an ordinary, effective way of running a business.
It starts off by suggesting that COBOL's superiority stems from having support for Binary Coded Decimal as a language-level element, rather than having it imported via a library (the overhead of which really starts to matter at the volume of transactions which COBOL is typically required to handle). It then broadens the discussion out to argue that the intrinsic shape of the COBOL environment ("stack allocation, pointers, unions with no run-time cost of conversion between types, and no run-time dispatching or type inference") is fundamentally different from languages like Java or C#, and that those differences provide performance benefits which cannot be easily obtained in those other languages.
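To make the contrast concrete, here's a rough Python sketch of the library-based approach the article is contrasting with; the apply_interest function and the numbers are mine, purely for illustration, not anything from the article. Every value here is a heap-allocated object and every operation dispatches through library code, which is exactly the kind of per-transaction overhead the author says adds up at mainframe volumes.

    # Library-based decimal arithmetic: the approach the article contrasts with
    # COBOL's language-level packed decimal. Each Decimal is a heap-allocated
    # object and every add/multiply goes through library code.
    # (apply_interest and the figures are illustrative only.)
    from decimal import Decimal

    def apply_interest(balance: Decimal, annual_rate: Decimal) -> Decimal:
        """Add one month's interest to a balance, rounded to pennies."""
        monthly_rate = annual_rate / Decimal(12)
        return (balance * (Decimal(1) + monthly_rate)).quantize(Decimal("0.01"))

    print(apply_interest(Decimal("1000.00"), Decimal("0.05")))  # -> 1004.17

The COBOL equivalent would be something like a PIC S9(9)V99 COMP-3 field, where the compiler handles the fixed-point packed decimal arithmetic directly rather than dispatching through a library.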
It’s easy to look at this article, see assertions which are absolutely the opposite of one’s personal experience and conclude that the author is an idiot. For example, I almost entirely reject the idea that there is utility in making it hard to look at the text that you’re typing. I’m assuming it’s true for the author (why else would he say it?) but almost all the work I do depends on me being able to scan around the content that I’m working on, and the more screen real estate I have to play with, the more effective I am.
All the same, I think he’s hit on something important, and people who accuse him of pushing a “one size fits all” agenda are misunderstanding his argument. There are plenty of people out there for whom the “clamshell laptop” is a good match for their computing needs. But that’s not the point. The point is that if you are not one of those people, then you’re basically stuffed because you have no choice but to pay for hardware components which you do not need and which actively make your working / playing / living experience worse.
I am one of those people. I hate the non-reactive, low-travel keys which are ubiquitous on laptops. And I hate the screen. The specifics of my eyesight mean it’s always in the wrong place and it never shows me as much content as I need to do any of my jobs. If I’m going to do any kind of work, my first actions are to plug in a mechanical keyboard and attach a much larger vertically mounted screen. But I (or, more usually, the company that happens to employ me) inevitably end up forking out not inconsiderable amounts of money for keyboards and screens that I’m not going to use and which actually make my working environment less ergonomic and more frustrating to work with, because one of the key requirements of my job is that it is done in different places, which means I need to have a laptop, and it is simply assumed that a box with a screen attached and a keyboard built in is what a laptop is.
This is the real strength of the forever computer proposal. Not that those who find a clamshell laptop a good fit for their needs are “doing it wrong” and need to change their expectations and working practices, but rather that those who do not find it a good fit should not be required to put up with it. The company I work for is currently recruiting new developers. Each one who joins will be given the same laptop, pre-loaded with the same software, because we all do “the same job”. This totally ignores the differences between our physical and cognitive attributes and the places in which we will find ourselves working. Allowing our hardware to truly reflect our different needs would be better ergonomically, emotionally and economically. Sadly, it seems to be something that simply cannot be done in the modern hardware ecosystem.
There was a time when computers and information technology offered the prospect of autonomy, customisation, choice. Increasingly it feels as though the real proposition is, “You can make any choice you like, so long as it can be monetized and maximized in terms of revenue generation.” If you want something which suits your personal needs and makes your life better, but doesn’t generate plus signs on some financial planning spreadsheet then, sorry, you’re shit out of luck.
The idea of the vomit draft works for narrative text because it's aimed at human consumers, and humans are very adaptable when it comes to accepting input. We can absorb a whole bunch of incoherent, inconsistent content and still find the parts of it which make sense and do useful and interesting things with them.
An executing program is a lot less forgiving, for obvious and unavoidable reasons.
What TDD brings to the table when you are building a throwaway version is that it helps to identify and deal with the things which are pure implementation errors (failure to handle missing inputs, format problems, regression failures and incompatibilities between different functional requirements). In some cases it can speed up the delivery of a working prototype, or at least reduce the chance that the first thing the first non-developer user of your system does causes the whole application to crash.
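To give a purely hypothetical flavour of what I mean (the parse_amount function and the tests below are mine, not taken from any real project), these are the sort of cheap early checks that catch missing-input and format errors before the first non-developer user does:

    # Hypothetical prototype helper plus the kind of early tests described above.
    # They guard against pure implementation errors (missing input, bad format),
    # not against usability problems or wrong requirements.
    import pytest

    def parse_amount(raw):
        """Parse a user-supplied amount like '1,234.50' into a float."""
        if raw is None or not raw.strip():
            raise ValueError("amount is required")
        return float(raw.replace(",", ""))

    def test_missing_input_is_rejected():
        with pytest.raises(ValueError):
            parse_amount(None)

    def test_blank_input_is_rejected():
        with pytest.raises(ValueError):
            parse_amount("   ")

    def test_thousands_separator_is_handled():
        assert parse_amount("1,234.50") == 1234.50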
Genuine usability failures, performance issues and failure to actually do what the user wanted will not get caught by TDD, but putting automated tests in early means that the use of the prototype as a way of revealing unavoidable bugs is not undermined by the existence of perfectly avoidable ones. It may also make it easier to iterate on the functionality in the prototype by catching regressions during the change cycle, although I'll admit that the existence of lots of tests here may well be a double-edged sword. It very much depends on how much the prototype changes during the iterative phase, before the whole thing gets thrown away and rebuilt.
And, when you come to building your non-throwaway version, the suite of tests you wrote for your prototype gives you a checklist of things you want to be testing in the next iteration, even if you can't use the code directly. And it seems likely enough that at least some of your old test code can be repurposed more easily than writing it all from scratch.
I recently had to install a copy of Windows 10 Home onto one of my spare laptops, and there are at least two things that hit the "stupidly intrusive and annoying" buttons.
One: moving the mouse around on the task bar will suddenly pop up this ludicrously over-sized "Start" menu, generally featuring some "critical" piece of "news". These will often include pictures of political leaders, and such is the state of the world that these images produce a reaction like a mild form of Post Traumatic Truss Disorder. I imagine that with some poking around I could get rid of these, but a civilized OS would let me ask for them if I wanted, not simply shove them in my face.
Two: I try to avoid using this laptop except when I absolutely have to, but when I am forced to turn the wretched thing on, I am invariably greeted with a big blue screen inviting me to "Complete my Windows set up." This apparently includes setting up OneDrive (don't need; not going to do), buying an Office 365 subscription (definitely not going to do; LibreOffice is more than enough for what I want to do on that machine, and I'm certainly not going to throw 10 pounds a month at Microsoft for no material benefit), and various other things which I also do not need and do not care about. And the options at the bottom of this screen are (I paraphrase) "Do Now" and "Remind Me In Three Days". Where, exactly, is the "I'm happy as I am; just leave things alone" button?
I am never more appreciative of my Linux desktop, or the applications installed there, than after spending an hour using Windows. Linux does me the courtesy of assuming that I am an adult who knows what he wants and can work out how to do stuff when I need it. Windows treats me like a sub-moronic five-year-old who has to be constantly distracted with toys and sugary treats.
Becoming a "good writer", like becoming a good anything, is a matter of practice and study, of learning to use the tools which language provides. The ability to construct well-formed long sentences is one such tool. It may take time and effort before you can use it effectively, but this is the case with most tools which are worth using. To reject this tool because you are afraid you may use it badly is to needlessly limit what you are able to do.
This might be a worthwhile trade if you really could improve your text by only using short sentences, but sadly, you cannot. The central unit of meaning in written text is not the sentence, it is the paragraph. If you take a sentence that is long, discursive, incoherent and ill-formed, and break it into short sentences, this will not magically resolve all other things that are wrong with it. Fixing the text entails organising its elements so that they develop logically. If they are entertaining, rhythmically varied or otherwise engaging to read, so much the better, but the primary challenge is to organise what is said. If you can do that when it comes to the relationship between your sentences, then it's not any harder to do it with the clauses in a longer sentence. And if you can't, your prose will still be bad.
I had a range of early programming experiences, including BASIC and assembler on a ZX Spectrum, Pascal on an ICL mini and PL/1 on an IBM mainframe, and nothing I had seen or worked with up until that point came close to Visual Basic for hitting that sweet spot between power and ease of use.
I never got a chance to use Delphi in any serious way, so that might have been an alternative contender, but certainly VB was orders of magnitude easier than building Windows apps in C++ and Microsoft Foundation Classes, which was the other main alternative offering at the time VB3 appeared.
Non-programmers could use it to put together something that looked reasonably good and would work reasonably well without having to fill up their heads with a hideous morass of technological complications, and if you did have past development experience then it took away immense amounts of the pain and frustration which had previously been an inherent part of programming for Windows.
Having worked for two separate companies which managed to create serious long-lived applications and make quite satisfactory amounts of money using VB5 and 6, I'd also reject the idea that the language was inherently incapable of giving you the kind of control needed to do serious real-world work.
The default solutions which the UI toolkit pointed you towards often ended up as dead ends (the data control bound to an ADO or RDO database was a particularly terrible way of managing business processes and yet it was the one which Microsoft seemed obsessed with pushing people towards) but the language offered plenty of ways to do things which weren't terrible if you took the time to work out how to do so.
Yes, if you pushed things far enough, the limitations of the environment's abstractions, the fundamental sloppiness of its language design, and the choices which it decided to make for you would start to really hurt in terms of performance or ongoing maintenance. And version control was a joke, testing and release integration were worse, and DLL hell was something which bit you on an all-too-regular basis. But you could get a pretty long way before that hurt became acute, and in the meantime you could do much that was useful and productive with much less up-front effort than you'd need to do the same thing in any current .NET language, or even in something as supposedly friendly and accessible as Python.