Former academic here. That kind of stuff looks within the normal range of a Broader Impacts section. Since the 80s, if you do some obscure fundamental research, then you have to say how it's going to benefit people. Say you think there's a risk that it's not good enough to say "we will understand this natural process and there's a lot of ways that can be carried forward and then that will make it easier to figure out what to research in field X and then maybe that can be used to cure cancer or make guns." And there's always such a risk, with proposal acceptance rates being low. Then you add a sentence about how you'll also educate kids about that thing -- promising to spend a Wednesday afternoon visiting an elementary school sounds like a small price to pay for increasing the acceptance probability of a multi-year grant by 1%.
In the last few years, you had to say something about underrepresented minorities. If your university is in an urban environment where it so happens that the local elementary school is full of URMs, then you don't even need to change anything about your plan.
That is a thing in the ACM-ICPC. Typically competitors would come in with printouts of code from times they've implemented a tricky algorithm like the Hungarian Algorithm or the Blossom Algorithm in the past, so that having it on hand would jog their memory and let them be constrained only by typing speed once they've figured out how to adapt it to their problem.
TCP is great. Long chains of one-line functions that just permute the arguments really suck. These both get called abstraction, and yet they're quite different.
But then you hear people describe abstraction ahem abstractly. "Abstraction lets you think at a higher level," "abstraction hides implementation detail," and it's clear that neither of those things is really an abstraction.
As the OP mentions, we have a great term for those long chains of one-line functions: indirection. But what is TCP? TCP is a protocol. It is not just giving a higher-level way to think about the levels underneath it in the 7-layer networking model. It is not just something that hides the implementations of the IP or Ethernet protocols. It is its own implementation of a new thing. TCP has its own interface and its own promises made to consumers. It is implemented using lower-level protocols, yes, but it adds something that was fundamentally not there before.
I think things like TCP, the idea of a file, and the idea of a thread are best put into another category. They are not simply higher level lenses to the network, the hard drive, or the preemptive interrupt feature of a processor. They are concepts, as described in Daniel Jackson's book "The Essence of Software," by far the best software design book I've read.
There is something else that does match the way people talk about abstraction. When you say "This function changes this library from the uninitialized state to the initialized state," you have collapsed the exponentially-large number of settings of bits it could actually be in down to two abstract states, "uninitialized" and "initialized," while claiming that this simpler description provides a useful model for describing the behavior of that and other functions. That's the thing that fulfills Dijkstra's famous edict about abstraction, that it "create[s] a new semantic level in which one can be absolutely precise." And it's not part of the code itself, but rather a tool that can be used to describe code.
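To make that concrete, here's a minimal sketch (all names invented): the abstraction lives in the comments and in the mapping from concrete state to the two abstract states, not in the code itself.

    # Concretely the library's state is some arrangement of bits; the abstract
    # view collapses all of it down to "uninitialized" or "initialized".
    class Library:
        def __init__(self):
            self._config = None
            self._buffers = []
            self._conn = None

        def initialize(self, config):
            # Abstractly: moves the library from "uninitialized" to "initialized".
            self._config = dict(config)
            self._buffers = [bytearray(4096)]
            self._conn = object()  # stand-in for a real resource

        def _abstract_state(self):
            # The abstraction function: exponentially many concrete bit-patterns
            # map down to just two abstract states.
            return "initialized" if self._conn is not None else "uninitialized"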
It takes a lot more to explain true abstraction, but I've already written this up (cf.: https://news.ycombinator.com/item?id=30840873 ). And I encourage anyone who still wants to understand abstraction more deeply to go to the primary sources and try to understand abstract interpretation in program analysis or abstraction refinement in formal verification and program derivation.
Hey Jimmy, I've read your comment and also your article in the past with great interest. This topic is absolutely fascinating to me.
I just re-read your article but unfortunately I still struggle to really understand it. I believe you have a lot of experience in this, so I'd love to read a more dumbed down version of it with less math and references to PL concepts and more practical examples. Like, this piece of code does not contain an abstraction, because X, and this piece of code does, because Y.
I'll have to muse about what the more dumbed down version would look like (as this version is already quite dumbed down compared to the primary sources). It wouldn't be quite a matter of saying "This code contains an abstraction, this other code doesn't," because (and this is quite important) abstraction is a pattern imposed on code, and not part of the code itself.
We do have a document with a number of examples of true abstraction — written in English, rather than code, in accordance with the above. It's normally reserved for our paying students, but, if you E-mail me, I'll send it to you anyway — my contact information is easy to find.
For example, in the "TV -> serial number" abstraction, if I were to define only one operation (checking whether two TV's are the same), would it make it a good abstraction, as now it is both sound and precise?
And what are the practical benefits of using this definition of abstraction? Even if I were to accept this definition, my colleagues might not necessarily do the same, nor would the general programming community
> if I were to define only one operation (checking whether two TV's are the same), would it make it a good abstraction, as now it is both sound and precise?
It would!
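For concreteness, a tiny sketch of that single-operation abstraction (hypothetical names, and assuming serial numbers uniquely identify physical TVs):

    from collections import namedtuple

    # Abstraction: map each concrete TV to its serial number, forgetting the rest.
    TV = namedtuple("TV", ["serial_number", "model", "firmware", "owner"])

    def same_tv(a, b):
        # The one operation defined on the abstraction. Under the assumption
        # above, the answer is exactly right (sound and precise), even though
        # everything except the serial number has been thrown away.
        return a.serial_number == b.serial_number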
> And what are the practical benefits of using this definition of abstraction?
Uhhh...that it's actually a coherent definition, and it's hard to think or speak clearly without coherent definitions?
If you're talking about using it in communication, then yeah, if you can't educate your coworkers, you have to find a common language. They should understand all the words for things that aren't abstraction except maybe "concept," and when they use the word "abstraction," you'll have to deduce or ask which of the many things they may be referring to.
If you're talking about using it for programming: you kinda can't not use it. It is impossible to reason or write about code without employing abstraction somewhere. What you can do is get better about finding good abstractions, and more consistent about making behavior well defined on abstract states. If you're able to write in a comment "If this function returns success, then a table has been reserved for the requesting user in the requested time slot," and the data structures do not organize the information in that way, and yet you can comment this and other functions in terms of those concepts and have them behave predictably, then you are programming with true abstraction.
In this case, not programming with true abstraction would mean one of two things:
1. You instead write "If this function returns success, then a new entry has been created in the RESERVATIONS table that....", or
2. You have another function that says "Precondition: A table has been reserved for the user in this timeslot," and yet it doesn't work in all cases where the first function returns success
I think it's pretty clear that both ways to not use true abstraction make for a sadder programming life.
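To make that concrete, here's a hedged sketch (all names hypothetical) of the happy version: both comments speak in terms of the abstract concept of a reservation and agree with each other, while the underlying storage is free to be organized however it likes.

    NUM_TABLES = 20  # hypothetical restaurant capacity

    def reserve_table(bookings, user, slot):
        # If this function returns True, a table has been reserved for `user`
        # in `slot` -- stated abstractly, with no mention of the representation.
        tables = bookings.setdefault(slot, {})  # concrete rep: slot -> {table: user}
        for table_id in range(NUM_TABLES):
            if table_id not in tables:
                tables[table_id] = user
                return True
        return False

    def cancel_reservation(bookings, user, slot):
        # Precondition (abstract): a table has been reserved for `user` in `slot`,
        # i.e. this works in every case where reserve_table returned True.
        tables = bookings[slot]
        table_id = next(t for t, u in tables.items() if u == user)
        del tables[table_id]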
"Define errors out of existence" might sound like "make illegal states unrepresentable," it's actually not. Instead it's a pastiche of ideas rather foreign to most FP readers, such as broadening the space of valid inputs of a function. One of his examples is changing the substr function to accept out of bounds ranges.
You might be interested in my review. I'm a Haskeller at heart, although the review draws more from my formal methods background. Spoiler: his main example of a deep module is actually shallow.
Does Ousterhout actually say modules must always have a longer implementation than their spec, or just that this is a generally desirable feature?
If he did, I agree with you, he was wrong about that. I also agree that the unix file API is probably not a good example.
But whether or not he did, I think the dissection of edge cases would be better off emphasizing that he's got something importantly right that goes against the typical "small modules" dogma. All else being equal, deeper modules are good--making too many overly small modules creates excessive integration points and reduces the advantages of modularity.
P.S. While I'm here, this is not really in response to the parent post, but the example in the article really does not do justice to Ousterhout's idea. While he does advocate sometimes just inlining code and criticizes the pervasive idea that you should shorten any method of more than n lines, the idea of deep modules involves more than just inlining code.
> Does Ousterhout actually say modules must always have a longer implementation than their spec, or just that this is a generally desirable feature?
I mean the spec is a lower bound on the size of the solution, right? Because if the solution were shorter than the spec, you could just use the solution as the new shorter spec.
Not necessarily. The implementation is very often more defined than the spec (for example, a sorting spec only promises a sorted permutation of the input, while any particular implementation also fixes how ties are broken). If the implementation is the spec, then it means that even the smallest change in behavior may break callers.
> his main example of a deep module is actually shallow.
It's not, you're just ignoring what he said:
"A modern implementation of the Unix I/O interface requires hundreds of thousands of lines of code, which address complex issues such as: [... 7 bullet points ...] All these issues, and many more, are handled by the Unix file system implementation; they are invisible to programmers who invoke the system calls."
So sure, the `open` interface is big in isolation but when compared to its implementation it's tiny, which is what you've badly missed.
The book also brings up another example right after this one, that of a Garbage Collector: "This module has no interface at all; it works invisibly behind the scenes to reclaim unused memory. [...] The implementation of a garbage collector is quite complex, but the complexity is hidden from programmers using the language". Cherry picking, cherry picking.
Then you proceed to not mention all the other key insights the book talks about and make up your own example of a stack data structure not being a deep abstraction. Yes, it's not. So? The book specifically emphasizes not applying its advice indiscriminately to every single problem; almost every chapter has a "Taking it too far" section that shows counterexamples.
Just so you don't attempt to muddy the waters here by claiming that to be a cop-out, the very point of such books is to provide advice that applies in general, in most cases, for 80% of the scenarios. That is very much true for this book.
Overall, your formal background betrays you. Your POV is too mechanical, attempting to fit the book's practical advice into some sort of a rigid academic formula. Real world problems are too complex for such a simplified rigid framework.
Indeed, a big reason why the book is so outstanding is how wonderfully practical it is despite John Ousterhout's strong academic background. He's exceptional in his ability to bring his more formal insights into the realm of real world engineering. A breath of fresh air.
I don't have much to say to most of your comment --- a lot of the text reads to me like a rather uncharitable description of the pedagogical intent of most of my writing.
I'll just respond to the part about deep modules, which brings up two interesting lessons.
First, you really can't describe an implementation of the Unix IO interface as being hundreds of thousands of lines.
That's because most of those lines serve many purposes.
Say you're a McDonalds accountant, and you need to compute how much a Big Mac costs. There's the marginal ingredients and labor. But then there's everything else: real estate, inventory, and marketing. You can say that 4 cents of the cost of every menu item went to running a recent ad campaign. But you can also say: that ad was about Chicken McNuggets, so we should say 30 cents of the cost of Chicken McNuggets went to that ad campaign, and 0 cents of everything else. Congratulations! You've just made Big Macs more profitable.
That's the classic problem of the field of cost accounting, which teaches that profit is a fictional number for any firm that has more than one product. The objective number is contribution, which only considers the marginal cost specific to a single product.
Deciding how many lines a certain feature takes is an isomorphic problem. Crediting the entire complexity of the file system implementation to its POSIX bindings -- actually, a fraction of the POSIX bindings affected by the filesystem -- is similar to deciding that the entire marketing, real estate, and logistics budgets of McDonalds are a cost of Chicken McNuggets but not of Big Macs. There is a lot of code there, but, as in cost accounting, there is no definitive way to decide how much to credit to any specific feature.
All you can objectively discuss is the contribution, i.e.: the marginal code needed to support a single function. I confess that I have not calculated the contribution of any implementation of open() other than the model in SibylFS. But Ousterhout will need to do so in order to say that the POSIX file API is as deep as he claims.
Second, it's not at all true that a garbage collector has no interface. GCs actually have a massive interface. The confusion here stems from a different source.
Programmers of memory-managed languages do not use the GC. They use a system that uses the GC. Ousterhout's claim is similar to saying that renaming a file has no interface, because the user of Mac's Finder app does not need to write any code to do so. You can at best ask: what interface does the system provide to the end-user for accessing some functionality? For Finder, it would be the keybindings and UI to rename a file. For a memory-managed language, it's everything the programmer can do that affects memory usage (variable allocations, scoping, ability to return a heap-allocated object from a function, etc), as well as forms of direct access such as finalizers and weak references. If you want to optimize memory usage in a memory-managed language, you have a lot to think about. That's the interface to the end user.
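As a small illustration of those forms of direct access, here's what they look like in Python (used purely as an example of a memory-managed language):

    import gc
    import weakref

    class Node:
        pass

    obj = Node()
    handle = weakref.ref(obj)    # a weak reference: part of the programmer-facing
                                 # interface to the collector
    weakref.finalize(obj, lambda: print("collected"))   # a finalizer, likewise

    del obj          # drop the last strong reference
    gc.collect()     # nudge the collector (CPython frees it via refcounting anyway)
    print(handle())  # None: the collector has cleared the weak reference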
If you want to look at the actual interface of a GC, you need to look at the runtime implementation, and how the rest of the runtime interfaces with the GC. And it's massive -- GC is a cross-cutting concern that influences a very large portion of the runtime code. It's been a while since I've worked with the internals of any modern runtime, but, off the top of my head, the compiler needs to emit write barriers and code that traps when the GC is executing, while the runtime needs to use indirection for many pointer accesses (if it's a moving GC). Heck, any user of the JNI needs to interface indirectly with the GC. It's the reason JNI code uses a special type to reference Java objects instead of an ordinary pointer.
If you tally up the lines needed to implement either the GC or the POSIX file API vs. a full spec of its guaranteed behavior, you may very well find the implementation is longer. But it's far from as simple a matter as Ousterhout claims.
The example you quote for "Define errors out of existence", while it indeed does not follow "make illegal states unrepresentable", does follow what IMO is also an FP principle: "a total function is better than a partial one".
Your review is great! But I think the idea that it's in opposition to PoSD is not right; I think it's a further development and elaboration in the same direction as PoSD.
This is an interesting observation! It seems like the "deep modules" heuristic has validity under it, but Darmani is looking for a more universal, rock-bottom way to define the principle(s) and their boundaries.
Darmani, is it fair to say that each interface should pay us back for the trouble of defining it—and the more payback the better? And given that, accounting for the ROI is either very complex work, or just intuitive gut instinct—as you point out in the Chicken McNuggets example?
On the one hand, this is the stuff of religious wars. On the other hand, I see value in having a mental model that at least prompts our intuition to ask the questions: What is this costing? And how much value is it adding? And how does that compare to some completely different way of designing this system?
E.g., for users of certain systems, the cost of a GC may be roughly 0 as measured by intuition. I'm thinking of a system where the performance impact is in the "I don't care" zone, and no one is giving a single thought to optimizing memory management. For other users in other contexts, the rest of the interface of the GC becomes relevant and incurs so much cost that the system would be simpler overall without garbage collection.
Many other systems sit somewhere in between, where a few hot loops or a few production issues require lots of pain and deep understanding of GC behavior, but 99% of users' work can be blissfully ignorant about that.
And in many of these contexts, well-informed intuition might be the best available measurement tool for assessing costs and benefits.
> each interface should pay us back for the trouble of defining it—and the more payback the better
This seems to be the core question of this comment. I'll make the boring claim that every piece of code should pay us back for the trouble of defining it, which doesn't leave much more to say in response.
An important thing in this kind of conversation is to keep clear track of whether you're talking about the general idea of an interface (the affordances offered by something for using it and the corresponding promises) or the specific programming construct that uses the "interface" keyword. When you talk about defining an interface, you could mean either.
Another thing to remember is that, when you use "interface" in the former sense, everything has an infinite number of interfaces. For example, your fridge offers a dense map of the different temperatures and humidities at each spot. You could "learn this interface" by mapping it out with sensors, and then you can take full advantage of that interface to keep your veggies slightly fresher and have your meat be defrosted in exactly 36.5 hours. But if you get that information, there are countless ways to forget pieces and go from "the back left of the bottom shelf tends to be 35-36 F" to "the entire back of the bottom shelf tends to be somewhere between 0.5 and 2 degrees colder than the top of the fridge" down to "idk the fridge just keeps stuff cold." These are examples of the infinitely many interfaces you can use your fridge at, each offering a different exact set of abilities, most of which are irrelevant for the average user.
My review has a bit of a negative vibe, but when I look through my paper copy of PoSD, the margins are full of comments like "Yes!" and "Well said."
This is correct, and I came to the comments to say the same thing. It takes some work to implement without arbitrary-precision floating point, but arithmetic coding can make use of the full entropy in the input stream, whereas the approach in the article discards a lot of information.
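To make the comparison concrete, here's a toy, non-streaming arithmetic coder over exact rationals (a hypothetical biased coin stands in for whatever distribution the input has). Real implementations replace the Fractions with fixed-width integer arithmetic, which is where the fiddly work lives.

    from fractions import Fraction

    PROBS = {"H": Fraction(2, 3), "T": Fraction(1, 3)}  # hypothetical source model

    def encode(symbols):
        low, width = Fraction(0), Fraction(1)
        for s in symbols:
            cum = Fraction(0)
            for sym, p in PROBS.items():
                if sym == s:
                    low, width = low + cum * width, width * p
                    break
                cum += p
        return low, width      # every x in [low, low + width) encodes `symbols`

    def decode(x, n):
        out = []
        for _ in range(n):
            cum = Fraction(0)
            for sym, p in PROBS.items():
                if cum <= x < cum + p:
                    out.append(sym)
                    x = (x - cum) / p
                    break
                cum += p
        return "".join(out)

    low, width = encode("HHTH")
    assert decode(low, 4) == "HHTH"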
1. How do you/y'all feel about the ethics of publishing this kind of detailed instructional material for self-replicating machines on the open internet, given the potential risk of behavioral singularities from as-of-yet-uncharted intelligence explosion dynamics? I mean this question in the most friendly, least “gotcha” way possible — feel free to answer flippantly!
2. What would you say was the “frame problem” of this work that blocked progress, either recently or in your historical readings? I’m just starting to examine this literature after intentionally not engaging while designing my own systems, and it seems pretty damn robust in a post-frame-problem world. What might I be missing out of naïveté?
1. I've been a doomer since before the term was invented, and I never had any worry about synthesis research based on symbolic techniques. Gödel is our friend here: they're not powerful enough to model themselves. The best you get is stuff like https://ieeexplore.ieee.org/abstract/document/7886678?casa_t... . An important component of the Sketch synthesizer is its simplifier, and this paper used Sketch to synthesize a better simplifier, and that speeds up synthesis times by 15%-60%. But that's a one-time improvement. The exponential take off comes from somewhere else.
2. Let me see if I understand your question: I think you're saying "The frame problem of AI and philosophy, https://plato.stanford.edu/entries/frame-problem/ , was the major blocker in AI systems, and now it's been solved by deep learning. What is the corresponding problem in program synthesis?" I think your "pretty damn robust" comment is important, but I haven't yet figured out what you mean.
Part of the reason I found this confusing is that the literal frame problem also appears in PL and synthesis, primarily in the verification of heap-based imperative programs. See the "frame rule" of separation logic.
So anyway, what do I see as the big challenges stopping synthesis from reaching its dream of being able to generate well-designed large programs with little input?
My two answers:
First, there's the problem of finding good abstractions. You see this in very old work: trying to prove an imperative program works is easy to automate EXCEPT for the problem of finding loop invariants. This has remained a challenging problem in the 50 years since it was first described. Craig interpolation is the classic solution, but it only goes so far. The synthesis view is: figuring out how to write loops is not straightforward because you need to summarize infinitely many possible states. And that's loops; designing data structures and finding good invariants for stateful applications is harder still.
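(For anyone who hasn't met the term: a loop invariant is a fact that holds on every iteration and is strong enough to imply the postcondition, like the assert in this toy example. The hard part is getting a machine to find such facts for interesting loops.)

    def sum_list(xs):
        total, i = 0, 0
        while i < len(xs):
            # Loop invariant: total == sum(xs[:i]) and 0 <= i <= len(xs)
            assert total == sum(xs[:i])
            total += xs[i]
            i += 1
        # Invariant plus the exit condition (i == len(xs)) gives the postcondition:
        return total  # == sum(xs)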
Second is what I like to call the "semantic veil." Generally, for any given chunk of code, there are multiple goals and abstractions that yield the same chunk of code. See my analogy to dairy-free chocolate chips in https://www.pathsensitive.com/2023/03/modules-matter-most-fo... . So deducing the intentions of a piece of code from just the code itself is literally impossible. Unfortunately, the way humans design programs is much better understood as being about these higher-level organizational structures, and not about the code itself. This is an area where I'm excited about the potential of LLMs: they can make excellent use of the information in priors and in natural language to guess said intentions.
Hi @Darmani, what have you been working on in the meanwhile? Given the current state of programming and compsci research, what interests you these days? (What do you see as most important 5 to 10 years down the road?) As a layperson I don't have time to grok most of this but revisit from time to time.
I mostly left academia after graduating, and am not actively following current research, although I still live in Cambridge and attend a fair number of research events. I can say that deep learning has taken over PL research, just like everything else. A good number of PL and synthesis people have shifted heavily into pure deep learning, and a large proportion of those that haven't are working either on applying deep learning and LLMs to solve PL problems, fusing PL and ML techniques (aka "neurosymbolic programming" -- the Scallop project from Mayur Naik's group is particularly exciting to me), or finding ways to use PL techniques to solve ML problems. For an example of the latter, I just flipped through this year's PLDI papers, and one caught my eye that sounds like it has nothing to do with AI, "Hashing Modulo Context-Sensitive Alpha Equivalence." (Decoding the jargon, that means "How to deal with an enormous set of programs that contain lambdas" -- something that comes up when doing search-based synthesis and superoptimization.) Its abstract ends: "We have employed the algorithm to obtain a large-scale, densely packed, interconnected graph of mathematical knowledge from the Coq proof assistant for machine learning purposes."
I think PL techniques do provide the key to overcoming a lot of the problems with LLMs. You want your LLM to have better correctness, reasoning, and goal-directed search -- researchers in programming languages and formal methods are expert at this. I am here to some extent conflating PL/FM techniques with traditional automated reasoning, but I think that's a fine conflation to make, because such techniques have been incubated by the PL/FM communities for the past 30 years since they were sidelined by AI. Case in point: a very large fraction of computational logic and automated reasoning papers are motivated by problems in program analysis, verification, and synthesis: https://easychair.org/smart-program/FLoC2022/index.html
As for what I have been up to: I've trained a few hundred more software engineers (mirdin.com). While I mostly stayed away from AI stuff during my Ph.D., I've given in to the tides, and am now building a startup using my AI, PL, and pedagogy expertise to solve all problems related to codebase learning (onboarding new hires, changing teams faster, etc.).
At the very end of my Ph. D., I discovered a new program synthesis technique based on constrained tree automata, and used it to build a synthesizer which is 8x more performant on one benchmark than the previous SOTA while using 10x less code. https://www.jameskoppel.com/files/papers/ecta.pdf . So the research I've done since graduating has largely been follow-ups to that. See https://www.computer.org/csdl/proceedings-article/icst/2023/... , https://pldi22.sigplan.org/details/egraphs-2022-papers/4/E-G... . I'm currently on two collaborations. One is continuing to develop algorithms for new kinds of constrained tree automata that can synthesize more kinds of programs. The other is an outgrowth of my startup: an empirical study on existing tools for codebase learning.
Anyway, that's not a comprehensive answer on what to watch out for in the field, which I am not presently qualified to give if I ever was, but it's the things that have my attention.
Oh, but: watch Isil Dillig. Everything that comes out of her lab is good.
Supporting an old codebase is a pretty broad question. There are quite a few companies supporting COBOL.
But for migrations, and really anything interesting or unique, I recommend Semantic Designs. http://www.semdesigns.com/ . Past customers include ADP, the Social Security Administration, the Bank of Australia/New Zealand, and Goldman Sachs Australia.
They do everything, but COBOL migrations are their bread-and-butter. There are hundreds of COBOL dialects out there, and they have no trouble supporting all of them.
I interned for them in 2016, and the CEO served on my thesis committee. But I'm not saying this because I worked there; I worked there because I thought they had the best tech.
But, looking at Semantic Designs' site and info, I have a question: if this tech has existed since 2010 or so, why are all the big banks not using it? Is it just bad marketing, or is there a gotcha? They are all whining about their COBOL problem; this could be a fix......
Does this mean that the market is not what folks imagine?
But: Au contraire. It's existed much longer than that.
So you've heard of products called vitamins and painkillers?
The CEO of Semantic Designs, Ira Baxter, calls the kinds of migrations and mass changes they sell "heart surgery."
As in: "How many people go to the doctor and say 'Doctor, I think I need some heart surgery?' No one."
A very common thing that happens is that a company starts a project to upgrade some dying system and gets cold feet in the middle and then scraps the project. "We survived last year, we're surviving this year, we'll survive next year." Sometimes it takes extreme pressure, like the compiler for the original language no longer existing, before they migrate.
(And it's not just the really big changes. A friend of mine, who founded another program-transformation startup, discovered this when he landed a deal using his tool to upgrade some company from PHP 4 to PHP 5. The customer decided they needed far more code review than the kinds of trivial changes being made should actually require, balked at that cost, and wound up not doing the upgrade.)
When Ira begins talking to a new customer, he asks two questions:
1. How long have you been in your job?
2. Have you tried to do this kind of thing before?
If the answer to #1 is too long, then he knows that either the manager he's talking to is risk-averse and complacent, or that he'll be promoted to his next job in the middle of the project. To make a good deal, he needs his counterpart to be someone who wants this kind of upgrade to be their big initiative that will help them rise.
If the answer to #2 is no, then he knows that talks will eventually break down and the customer will try to do it manually or in-house, underestimating how hard it is. For their most famous project, the B-2 Stealth Bomber migration, Northrop Grumman did exactly that -- came to Semantic Designs after a failed manual migration and a failed in-house tool development effort.
So, yeah -- it's a very tough market. The buyers don't behave like you think they should.
I wrote about this some in an appendix to pathsensitive.com/2023/09/its-time-for-painkillers-vitamins-die.html
That's very relevant to generate-and-test program search.
The SBSE (search-based software engineering) and APR (automated program repair) communities do that kind of stuff. E.g.: There's a paper from that world, which I can't presently find, on "super mutants": basically, use environment variables as feature flags so that you can compile 100 variants at once. Back when I worked in this space, one of the best things I did was run the Java compiler from within nailgun (a program that saves JVM startup time) to compile mutants faster.
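The flavor of the trick, sketched here in Python for brevity (the actual work is about batching compilation of mutants of compiled code): an environment variable acts as a feature flag that picks which mutant is live in a given test run, so one build carries many mutants.

    import os

    MUTANT = int(os.environ.get("MUTANT_ID", "0"))  # 0 means "original program"

    def clamp(x, lo, hi):
        if MUTANT == 1:
            return max(lo, min(hi, x + 1))   # mutation: off-by-one
        if MUTANT == 2:
            return min(lo, max(hi, x))       # mutation: swapped operators
        return max(lo, min(hi, x))           # original code

    # The test harness just sets MUTANT_ID and reruns the suite.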
Symbolic program synthesis doesn't work like this. Such synthesizers effectively have their own compiler toolchains that transform programs into constraint systems and/or do partial evaluation of programs with holes. Most of them never run a traditional compiler; they just guarantee that they output correct code.
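A minimal sketch of the "program with a hole becomes a constraint system" idea, using the z3 SMT solver (assumes the z3-solver package; the hole and the spec are invented for illustration):

    from z3 import Int, Solver, sat

    # Synthesize the constant hole `c` in `double(x) = x * c` from I/O examples,
    # without compiling or running any candidate program.
    c = Int("c")
    s = Solver()
    s.add(2 * c == 4, 5 * c == 10)   # spec: double(2) == 4 and double(5) == 10
    assert s.check() == sat
    print(s.model()[c])              # -> 2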
They sued a friend's company as well. He wrote very vividly about what it's like to be on the receiving end.
I've stayed silent for too long. And I'm not going to stay silent anymore.
A few weeks ago we got sued by Momofuku. It was one of the most painful experiences of my life. After getting IBS, I decided to leave my six-figure job in Silicon Valley, to venture into the Biotech and Food-as-Medicine world in hopes of finding a solution. I gave up financial stability, relationships, and sanity to get Redbloom off the ground. I still remember the day Redbloom was 3 days away from dying, and on the day I was supposed to shut down the company, we went viral. Hundreds of orders flooded in over the holidays and I had to juggle between pissed customers and fundraising. This was winter... funds were closed, everyone told me we weren't going to close the round until March. We closed the round 1/15. From Thanksgiving all the way to Lunar New Year, I didn't see my family, I didn't see my friends. I slept on the factory floor. I made my ops team work 15 hours a day to make the facility expansion deadlines. Redbloom has grown beyond my control to the point it became impossible to find a co-founder - I was a single parent. That legal letter from Momofuku felt like a knife being held to my kid's throat. The worst part about it was we couldn't even speak up about it - were we the only ones? Do we have a target on our backs before we can even walk? Everyone close to me was like "what did you do?"
The funny thing is - we don't even compete against Momofuku. People with IBS can't eat Momofuku. I'm here trying to create medicine to reduce gut sensitivity, but instead of making a pressed pill, our medicine form factor is chili oil (for you science nerds, we microencapsulate capsaicin with oleic acid to provide a cushion for the TRPV1 receptors in your gut to prevent autoimmunity in the short run, allowing the capsaicin to reduce visceral hypersensitivity in the long run). Yet, David Chang still found a way to eat away at the R&D budget required for our clinical studies...
Today, we found out that they are suing multiple asian food founders for various things. This has gone beyond a simple lawsuit. A hole is being torn in the AAPI community right now. AAPI founders are supposed to support each other, not fight each other over centuries of culture that spans beyond our generation. This is no longer about me, it's no longer about Redbloom. It's bigger than that now. The articulation of our asian heritage in the western world is at stake. And we will stay silent no longer.
Homiah - Michelle Tew
MìLà - Jennifer Liao, Caleb Wang
And all the other AAPI founders who are experiencing this: we are here to stand by you! We are here to speak up, and we are here to support the AAPI community.
As someone with a biomedical background, the “science” explanation isn’t giving me much faith either. They should have just given a DOI to a relevant scientific paper instead.
My house is two different AAPI backgrounds and we love to enable other AAPI businesses. We'll share your stuff among our friends and community. Good luck!
Your friend should realize that a basic GAPS diet would resolve 99.9% of their IBS issues. Literally throw some chicken and bone broth in an instant pot and eat that daily. A little salt. That's it. Do it for a year. Hell, do it for a month.
When your gut health bounces back, start to mix it up again.
" I didnt' see my family, I didn't see my friends. I slept on the factory floor."
Oh, and maybe stop doing that too. Chronic stress equals chronic inflammation equals IBS and an array of other issues.
To people who are experienced in writing out the full interface of modules, ie their assumptions and guarantees, it's quite clear that being a "deep module" in Ousterhout's sense is quite rare and often undesirable, and that Ousterhout's examples of deep modules are actually shallow. See https://www.pathsensitive.com/2018/10/book-review-philosophy...
He gets a lot of other stuff right though. Love his writing on comments
I think I might be missing something from that article, but I don't think they're in so much disagreement with Ousterhout.
Ousterhout's core argument is this: consider a module as its interface (measured, say, in # of edge cases) multiplied by its effect (measured, say, in # of features achieved). When comparing two modules, we should prefer modules with smaller interfaces and larger implementations, i.e. of two modules with equally sized interfaces, prefer the one which does more, and of two modules which do the same thing, choose the one with the simpler interface.
Narrow vs wide in this context seems to me to mainly be a point of comparison - in practice, narrow and wide don't make much sense when talking about a module on its own, because the dimensions of interface and implementation are not comparable.
However, this article seems to mainly be about comparing interface to implementation and arguing that, because interfaces can be very complex, often as complex as or more complex than the implementation itself, Ousterhout is wrong. Or in other words, if we convert "# of edge cases" and "# of features achieved" to a single comparable unit like "# of lines of code", then for Ousterhout's advice to hold, the implementation must be more lines of code than the interface.
To me, that's kind of missing the point of the advice, which is less to do with the amount of code needed to implement something, and more to do with the capability of code vs its interface. Yes, lines of code can be a useful proxy for both dimensions, but the dimensions aren't meant to be directly comparable.
> To put this another way, each bomb can destroy an area of 34.2 square miles, and the maximum total area destroyed by our nuclear apocalypse is about 137,000 square miles, approximately the size of Montana, Bangladesh or Greece.
(I think that should be Bangladesh and Greece; Montana is larger than the two of them combined.)