I prefer studies over anecdotes and found this: https://www.amazon.com/Accelerate-Software-Performing-Techno...

According to this study, you can measure a team's progress and ability with:

* number of production deployments per day (want higher numbers)

* average time to convert a request into a deployed solution (want lower numbers)

* average time to restore broken service (want lower numbers)

* average number of bugfixes per deployment (want lower numbers)

I am curious about other studies in this area and if there is overlapping or conflicting information.


Haven't read the linked study, but it seems like all of those metrics (except time to restore broken service) can be very easily gamed -- thinking about the recent SO blog post https://news.ycombinator.com/item?id=25373353


Why would you want to game metrics that can honestly help improve your business outcome? The study is worth reading, but to put it simply: the metrics seem to make sense anyway, and if science shows that they do, that's even better.


Though I posted the study, I do want to see more work in this area to replicate the observed effects and to refine the KPIs. For example, I am personally skeptical of the notion of "number of defects per release" and want to see an exploration of "amount of time spent on defects per release" instead.

Science doesn't tell us anything (that would be authority or religion), and it doesn't prove things. It is a process that can assist us in logically determining what is NOT true about a cause-effect relationship to the point where we can make more accurate and practical predictions about those causes and effects. More experiments refine the current beliefs.


> Why would you want to game metrics that honestly can help improve your business outcome?

Q: If your manager imposed OKRs/KPIs on your team, especially if there were a financial incentive linked to achieving "results"/"performance indicators", how would you feel?

Note that what is good for the business may or may not have much to do - at least in the short to medium term - with what is good for the employees.


Definitely a reason to try to game the metrics (and probably easy, too). In my experience the metrics make sense when you use them to improve as a team; I would actually try to avoid using them as OKRs or KPIs.


He is saying they should practice process isolation. Example architecture: https://cr.yp.to/qmail/guarantee.html


He also suggests that even with process isolation in the application layer, the database is likely not going to be isolated and you'll be breached anyway.

The qmail architecture would likely be very inefficient with 100M+ users.


Rebase is not destructive. It creates new commits based on the previous ones. The confusion arises because the rebased branch pointer is given a new home; however, you can always add another branch at the original HEAD so you can revisit it easily, and you can always find the original HEAD via the reflog. If no branch points at the old commits, git's default policy keeps them for about a month before garbage collecting them.
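
For the skeptical, here is a quick sketch of recovering a pre-rebase state (branch names are hypothetical):

  git checkout feature
  git branch feature-old        # optional: keep a name on the current tip
  git rebase master             # creates new commits; the old ones stay in the object store
  git log feature-old           # the original history, untouched
  git reflog                    # or find the old tip here...
  git reset --hard feature@{1}  # ...and restore it if you regret the rebase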

Pre-push commit history is purely for the developer. A push is a publish, which means you are now presenting work for others to view, understand, code review, maintain, and use to debug. There is no more value for others in seeing every typo-fix, rollback, and redirection commit than there would be if our text files kept a complete and honest history of our every backspace. Nobody else wants to watch the rambling movie. BUT that movie can be important to you for a time, and it can remain in your local git repo.

If the commit history is getting long in the tooth, it does help to clean it up fairly often, and I have found cleanup of long-running branches to be much more valuable to my memory than trying to walk or bisect through every minute of my development process.

A quick note about merges: a merge commit should say something more interesting than "developer merged branch foo onto master"; a merge introduces something important, and old branch names may not exist in the future at all, or may no longer point at the same commits. If one follows this rule of thumb -- make commit messages useful -- then the fact that you were updating your branch with the latest master becomes unimportant; it is actually noise and confusion for anyone having to dig through commit histories, due to the massive number of forks it adds to the graph. In other words, always rebase onto master before merging.
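
A sketch of that flow (hypothetical branch and message names):

  git checkout feature
  git rebase -i master     # squash the typo fixes and rollbacks, replay onto master's tip
  git checkout master
  git merge --no-ff feature -m "Add per-user rate limiting"

The --no-ff forces a merge commit even where a fast-forward is possible, so the feature lands in history as one unit with a useful message.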

I look at commit history as a document that I can polish and present to others. I look at my code the same way, trying to make it readable for other humans. This is writing prose. I group like things together. Each commit is a solid microfeature. There are no typo commits.



Taxi drivers are more likely to die from homicide than from a traffic accident! That's shocking. I wonder what explains it. Armed robberies gone wrong?


I think that is correct, though I'm not sure anyone tracks the exact causes; maybe they would surprise us.

The last time I heard about violence against a taxi driver, the driver had been shot in the back in a carjacking, and the assailant was then unable to bypass an interlock device to restart the cab. So the cab company makes it hard to actually get away with the car, but the driver is probably carrying a lot of cash.


I wonder if they would agree to say a special word in the message. "Mention 'bananas' when you call."


I wonder if we've over-focused on the inlining vs functions-for-modularity thing here. Fundamentally, the primary objective is to express code in a way that is clear to the reader for maintenance. Secondarily, the code must work with the compiler to achieve the desired level of efficiency in speed or resource usage.

I have come to interpret this as: We are language designers. This isn't about functions. This is about building a language for the business case that is comprised of primitive expressions, means of combination, and means of abstraction. If the language is clear to the reader, expressing the problem well, it should be easier to detect problems or extend the existing language and its uses.

We are language designers already: if you build a traditional class with a bunch of methods, that is a language for dealing with the concept embodied by the class. It must be held to the same standards as any language, DSL, or API design.

I like to remind people that we don't tend to dig into the code behind printf(); we trust what it does. We have years of experience using that primitive. It's a great example of a function that has been through many revisions due to security issues, untrustworthy at its inception. What is key here is trusting that a primitive does as advertised, so that one does not have to dig into its source code repeatedly. Nested functions are not an issue in the presence of trust.

My "secondarily" clause has a fatal flaw: Many languages are not suited for building languages while simultaneously not trading off speed and resource usage. The ones with pre-runtime macros/templates assist us developers in the building of expressions beyond the limitations of the base language with minimal fuss.


"This isn't about functions. This is about building a language for the business case that is comprised of primitive expressions, means of combination, and means of abstraction."

Heh. Anyone up for a Sussman&Abelson drinking game? :D


The McDonnell Genome Institute at Washington University | St. Louis, MO | Full-time | ONSITE

I am looking for a non-entry level software developer to join my Applications/LIMS team at the McDonnell Genome Institute! We are currently working on projects in the areas of cloud storage, cloud compute, high-speed data transfer, and laboratory automation. If you are interested, please search for job 33387 at https://jobs.wustl.edu/, and apply through the system. They will pass along the information, and I will email you. Naturally, I'll answer questions here, too.

The interview process is the application, a work sample test plus phone interview to cover the test, and a tour of the lab.


Amateur perspective here...

My understanding is that it is the transformation of lists of symbols. This is one step above parsing characters/runes into more meaningful primitive data, and Lisp has facilities for manipulating trees of such data. The book Land of Lisp makes this distinction clear in an interesting way: it implements a text adventure and separates the notion of words or concepts (model) from their final string representation (view). For example, you can express a phrase with something like

  '(you see exits leading (exit east) , (exit west) , and (exit north))

or

  '(you see exits leading (series (exit east) (exit west) (exit north)))

Run this through your parser for that list of lists of symbols, and you may end up with

  You see exits leading [east], [west], and [north].

where east, west, and north are highlighted if expressed on the console or converted into appropriate URLs if expressed in HTML. You can also imagine a potential here for language conversions.
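
A minimal sketch of how such a renderer might look (my own toy code, not Land of Lisp's, with the commas dropped for simplicity):

  (defun render-word (w mode)
    "Render one word: (exit DIR) forms become links or highlights."
    (if (and (consp w) (eq (first w) 'exit))
        (let ((dir (string-downcase (second w))))
          (if (eq mode :html)
              (format nil "<a href=\"/go/~a\">~a</a>" dir dir)
              (format nil "[~a]" dir)))
        (string-downcase w)))

  (defun render-phrase (phrase mode)
    (format nil "~{~a~^ ~}"
            (mapcar (lambda (w) (render-word w mode)) phrase)))

  (render-phrase '(you see exits leading (exit east) (exit west) and (exit north)) :console)
  ;; => "you see exits leading [east] [west] and [north]"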

Along similar lines -- with due credit to the PAIP book -- you could separate the notion of some mathematical formula from its inputs. Do manipulations, simplifications, whatever, and THEN provide inputs. This suggests that with symbolic computation and some macros, you can assist the Lisp compiler in coming up with the speediest version of some formula. Or expression. Or template output.
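
A toy version of that separation (mine, not PAIP's):

  (defun simplify (expr)
    "Collapse (* x 1) and (+ x 0) anywhere in a formula tree."
    (if (atom expr)
        expr
        (let ((args (mapcar #'simplify expr)))
          (cond ((and (eq (first args) '*) (member 1 (rest args)))
                 (rebuild '* (remove 1 (rest args))))
                ((and (eq (first args) '+) (member 0 (rest args)))
                 (rebuild '+ (remove 0 (rest args))))
                (t args)))))

  (defun rebuild (op operands)
    "Drop the operator when only one operand remains."
    (if (= (length operands) 1)
        (first operands)
        (cons op operands)))

  (simplify '(* (+ x 0) 1))  ; => X

Only after the tree is as simple as it is going to get do you substitute the inputs, or hand the result to a macro so the compiler sees the cheap version.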

Not sure how to get an exact link to this comment, but look at sklogic's back-and-forth with me here for more uses of this: https://news.ycombinator.com/item?id=11589614


I'd like to think people would be ethically and morally motivated to do the right thing rather than expect to be rewarded for doing it. Perhaps it is how I was raised, but it seems weird to me to turn in a lost wallet expecting to get something back for it. This so-called "custom" is not my custom. It actually seems very childish, as if one were still in the phase of learning the importance of taking care of one's neighbor.


Sure, but if I had heard several news reports of people who found and returned wallets being falsely accused of theft and subjected to serious legal threats, then the next time I saw someone's wallet lying around, I would just ignore it and keep going.

The bug bounty isn't only about the money. It's also the company's way of advertising 'we aren't crazy assholes like those outfits you heard about on the news'.

(Yes, fixing the law would be a good idea. But in the meantime, a bug bounty is the solution.)


It's a bit of a stretch, but: you can expect people to be ethically and morally motivated, or you can apply security patches to your servers.


I think I'm bikeshedding; the whole "finder's fee" nonsense bugged me. The analogy between lost wallets and servers doesn't actually hold. One can have thieves and indifference in both worlds, but the natures of the exposed items and the victims are different. It is more acceptable to people -- though not any more right -- to figure that a faceless multi-million dollar corp can absorb a tiny theft/hit, but it is harder to allow pain to a relatable fellow human being (unless, of course, one is affected by the bystander effect or pressured by authority).

