It is true that the underlying technology used to write the code in the first place should be less forgiving. If you used a strictly typed, compiled language instead of PHP, you would have no choice but to fix many more of the errors, because the code would not compile otherwise.
Once it is running in production, though, things are quite different. You need the right combination of errors being well reported and gracefully handled, without aborting or breaking the rest of the functionality unnecessarily. At that point people are relying on it to get their jobs done, and they will usually find ways to work around the errors, and even the corrupt data these might produce, so they can keep meeting their deadlines while the programmers work on fixing the problem. This is much better than those same employees not being able to do their jobs, or getting paid to stand around and do nothing. I guess this attitude is largely driven by the practicalities of where I work. If the employees who rely on the code get behind or can't complete their work on time, our company is hit with thousands of dollars in fines under the contract terms we have to agree to in order to win the business in the first place. Then our customers can't bill their customers, so they are not happy either.
Even if you can reason about the code enough to reach a conclusion that seems like it must be true, that doesn't prove your conclusion is correct. When you figure something out about the code, whether through reason and research or through tinkering and logging/monitoring, you should embed that knowledge into the code, and use releases to production as a way to test whether you were right.
For example, in PHP I often find myself wondering whether a class I am looking at might have subclasses that inherit from it. Since this is PHP and we have a certain amount of technical debt in the code, I cannot rely 100% on a tool to give me the answer. Instead I have to manually search through the code for subclasses and the like. If after such a search I am reasonably sure nothing is extending that class, I will mark it "final" in the code itself. Then I will rerun our tests and lints. If I am wrong, eventually an error or exception will be thrown, and this will be noticed. But if that doesn't happen, the next programmer who comes along and wonders if anything extends that class (probably me) will immediately find the answer in the code: the class is final. This drastically narrows down what can happen, which makes it much easier to examine the code and refactor or make necessary changes.
Another example: you often come across some legacy code that seems like it can no longer run (dead code). But you are not sure, so you leave it in for now. In harmony with this article, you might log or in some way monitor whether that path in the code ever gets executed. If, after trying out different scenarios to get it to run down that path, and after leaving the monitoring in place on production for a healthy amount of time, you conclude the code really is dead, don't just add this to your mental model or some documentation; embed it in the code as an absolute fact by deleting the code. If this turns out to be wrong and manifests as a bug, it will eventually be noticed and you can fix it then.
By taking this approach you are slowly narrowing down what is possible and simplifying the code in a way that makes it an absolute fact, not just a theory or a model or a document. As you slowly remove this technical debt, you will naturally adopt rules like: all new classes start out final, and are only changed to non-final when you actually need to extend them. Eventually you will be in a position to adopt new tools, frameworks, and languages that narrow down the possibilities even more, further embedding the mental model of what is possible directly into the code.
In Kotlin, classes and methods are closed by default, making inheritance and polymorphism opt-in rather than the norm. It also makes you mark each variable as mutable or immutable by declaring it with a var or val, making you think more about (and enforce) whether state is mutable, thus avoiding a lot of state problems. It also has a concept called data classes, which works really well for most well-designed classes. I now consider it a code smell when a class is not a good candidate for being a data class. Most classes should be good candidates for data classes, or else you are usually using classes in a way that causes more problems than it solves.
The thing is, it doesn't prevent you from creating classes or methods that are open for inheritance, nor does it prevent you from allowing mutable state, but it does use the defaults and the language design to encourage limitations that are usually good.
Perhaps they could take these concepts a step further by making classes data classes by default, with a keyword to opt out of being a data class.
The main benefit of classes is as a way to define custom types. Otherwise you are stuck with the types built into the language, which are not designed to help ensure correct program state, logic, and behavior. When you use classes in this way, I call it type oriented programming (TOP).

Why would I compare classes to types? Just pass in the initial value at construction time, define the "operations" on that type by creating methods for your class, and define which other types those operations work with by setting the types of those methods' parameters. Make the class immutable, always returning a new instance with the updated value passed into the constructor.

Why did I mention these types helping the program be more correct? They should be used as the primary form of contracts. And these contracts are very simple to use: you just specify the relevant type in the parameters and return values, and you are done defining the contract for any given method. For example, let's say a method should only work with integers larger than zero. Instead of accepting an Int or an UnsignedInt, both of which would allow 0, you could define your own class called PositiveInt, designed to throw an exception if you pass zero or a negative number into the constructor. Then, instead of writing code inside the method to make sure its caller is following the contract, you just specify PositiveInt as the parameter type. If the contract is violated, the exception is thrown as early as possible, before the method is even called, helping programmers catch the original source of the problem. This also makes your code more readable, because you can see exactly what each method accepts and returns just by looking at its signature.

When you start thinking this way, you will notice many core types are missing from the language that should have been there from the beginning. Fortunately, you now know how to build them yourself.
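A minimal sketch of that PositiveInt idea in TypeScript (the class name comes from the comment above; the method names and details are illustrative, not from any particular codebase):

```typescript
// PositiveInt as described above: the constructor enforces the contract,
// so any method that takes a PositiveInt never needs to re-check that
// its argument is greater than zero.
class PositiveInt {
  readonly value: number;

  constructor(value: number) {
    if (!Number.isInteger(value) || value < 1) {
      throw new Error(`PositiveInt requires an integer > 0, got ${value}`);
    }
    this.value = value;
  }

  // An "operation" on the type: returns a new instance, keeping it immutable.
  plus(other: PositiveInt): PositiveInt {
    return new PositiveInt(this.value + other.value);
  }
}

// The contract lives in the signature; a violation throws in the
// constructor, before this method is even called.
function takeSteps(steps: PositiveInt): string {
  return `taking ${steps.value} step(s)`;
}

takeSteps(new PositiveInt(3));    // fine
// takeSteps(new PositiveInt(0)); // throws inside the constructor
```
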
> The main benefit of classes is as a way to define custom types. Otherwise you are stuck with the types built into the language
I'm going to stop you right there, because plenty of languages (especially functional ones) have ways of declaring custom complex types without using classes.
This will require that any object used as a DuckTypedObject must have the `quack` and `eyes` properties and may optionally have a `bark` of `false`, but doesn't otherwise prescribe what the object actually has to be.
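For readers following along, a sketch of what such a type might look like in TypeScript (the property types here are guesses; only the names `quack`, `eyes`, and `bark` come from the comment above):

```typescript
// Hypothetical DuckTypedObject: quack and eyes are required,
// bark is optional but, if present, must be literally `false`.
interface DuckTypedObject {
  quack: () => string; // assumed to be a method
  eyes: number;        // assumed to be numeric
  bark?: false;
}

// Any object with the right shape qualifies, regardless of how it was made:
const duck: DuckTypedObject = { quack: () => "quack", eyes: 2 };
const quiet: DuckTypedObject = { quack: () => "...", eyes: 2, bark: false };
```
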
Sorry to be pedantic but that isn't quite right. I do use the same technique myself quite a lot with the options pattern.
However, it is worth keeping the following in mind: the compiler will check that these properties are present and assigned correctly in your code. At runtime, however, nothing is guaranteed, especially when dealing with the DOM.
One of the things that I don't like about TypeScript (I have written a fair bit of it) is that it makes you believe you have type safety when it is really type hinting.
Yes. However, you can, for example, do the following in TypeScript:
function someFunction(obj?: DuckType) {
    // Snip other logic
    someOtherFunction(obj);
}

function someOtherFunction(obj: DuckType) {
}
The type checker won't catch that (at least not without strictNullChecks enabled). So you still have to do:
function someFunction(obj?: DuckType) {
    obj = obj || { /* set some object properties */ };
    // Snip other logic
    someOtherFunction(obj);
}

function someOtherFunction(obj: DuckType) {
}
I have found plenty of examples where people haven't set a default value because the compiler hasn't flagged anything wrong with the code and you get an uncaught reference error.
Which languages would you say are really good for defining your own types? Would they be a good fit for the example I provided where a function needs to accept integers larger than zero? Do they also allow you to define your own operations on those types? That is the part where classes seem to be a good fit: methods are basically operations supported by a type.
> Which languages would you say are really good for defining your own types? Would they be a good fit for the example I provided where a function needs to accept integers larger than zero?
I'm not an Ada expert, but it has excellent support for range-restricted integer types.[0]
Ada's 'discriminated types' are also fun. They let you create members which only exist when they're applicable. [1]
> Do they also allow you to define your own operations on those types
Looks like Ada supports operator overloading, yes. [2]
Ada is also great at separating the various aspects of OOP into separate language concepts, instead of going "everything is done with classes!!" as many popular languages do, leading to a lot of confusion in this thread.
A typical way to handle this in a functional language would be to create a datatype with a name like "PositiveInt", which just contains an Int inside it. However, in a language like OCaml, you can make it so that users of this type cannot directly create it, and instead must use a function like "makePosInt", which would check that its argument is positive, then give you back a value of type PositiveInt containing your data.
I'm not too experienced with this though, so this is pretty much the extent of my knowledge on this topic.
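The smart-constructor pattern described can be sketched in TypeScript as well, using a private constructor so the checked factory is the only way to obtain a value (the names `PosInt` and `makePosInt` are illustrative, following the comment's "makePosInt"):

```typescript
// Smart constructor: PosInt cannot be created directly; the only way
// to obtain one is makePosInt, which validates its argument first.
class PosInt {
  private constructor(public readonly value: number) {}

  static makePosInt(n: number): PosInt {
    if (!Number.isInteger(n) || n <= 0) {
      throw new Error(`${n} is not a positive integer`);
    }
    return new PosInt(n);
  }
}

const p = PosInt.makePosInt(42);
// `new PosInt(42)` would be a compile error: the constructor is private.
```
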
Then give it a constructor (which in Rust is just a regular static function) that checks the non-zero invariant. You can define methods on this type, and make it implement traits (which are kind of like interfaces).
The best bit: there is zero runtime cost to this, the memory-representation of this type is identical to that of the underlying u32.
Rust-style enums, which can contain data and also have methods implemented on them, are even better. That doesn't mean classes (structs in Rust) aren't useful, but once you use a language that lets you define other kinds of custom types, they seem very restrictive when they're the only available tool.
Yep. This whole discussion just seems like a C vs C++ styleguide slapfight.
Having the functions that operate on the struct attached directly to the struct declaration, versus having free functions whose first parameter is the struct they operate on, doesn't seem like a particularly meaningful distinction to me. OK, you like C-style programming over C++-style programming, congrats. It's still a class either way.
The distinction you describe is not meaningful, but it is also not the key feature that separates classes from other forms of code organization/polymorphism, like typeclasses in Haskell/Rust. That feature is inheritance.
I guess it's kind of close to a C++ class. It's pretty different from a class in a language like Java, because all classes in Java are heap allocated and accessed behind references.
Enums are the better example of non-class types. For example, you can have:
enum StringOrInt {
    String(String),
    Int(u32),
}
And you can go ahead and implement methods on that type. Classes have "AND-state", not "OR-state". But a Type in general can have either kind of state.
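A rough TypeScript analogue of that Rust enum, using a tagged union (the `kind` tag and the `describe` operation are assumptions of this sketch), shows the OR-state idea:

```typescript
// OR-state: a value is EITHER a string OR an int, never both at once.
type StringOrInt =
  | { kind: "string"; value: string }
  | { kind: "int"; value: number };

// An "operation" on the type; the compiler makes us handle both arms.
function describe(v: StringOrInt): string {
  switch (v.kind) {
    case "string":
      return `string of length ${v.value.length}`;
    case "int":
      return `int ${v.value}`;
  }
}
```
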
That's equivalent to wrapping an enum in a class. Emulations of type hierarchies without OO often fail like this: the A-or-B is literally a single type, so you lose out on type safety and are forced to constantly recheck the discriminant.
You actually don't need types or classes to do this. You could use design by contract, which is what I've done in Python and Perl, neither of which have very fancy type systems compared to something like Haskell.
With design by contract you can put in whatever fancy constraints you want on function parameters and return values, and those will be enforced.
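As a tiny sketch of the idea (not the commenter's actual Python/Perl code; the `contract` helper here is made up, and written in TypeScript for consistency with the other examples in this thread):

```typescript
// A made-up design-by-contract helper: precondition and postcondition
// checks are enforced at runtime, without defining any new types.
function contract<A extends unknown[], R>(
  pre: (...args: A) => boolean,
  post: (result: R) => boolean,
  fn: (...args: A) => R
): (...args: A) => R {
  return (...args: A): R => {
    if (!pre(...args)) throw new Error("precondition violated");
    const result = fn(...args);
    if (!post(result)) throw new Error("postcondition violated");
    return result;
  };
}

// Enforce "n must be greater than zero" without a PositiveInt type:
const half = contract(
  (n: number) => n > 0,
  (r: number) => r > 0,
  (n: number) => n / 2
);
```
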
As far as objects go, they're much more useful for me as just a means of passing state. Rather than using a bunch of global variables or having to pass in a ton of function arguments, I can just use an object which contains all the state I need.
Of course, having lots of state can usher in its own set of problems, and there's something to be said for trying to make your code as stateless as possible. But sometimes you need state, and maybe even a lot of it.
I use C structs in the same way. I think it's fine to use classes for this purpose. The problem with classes is that certain people go crazy with inheritance and polymorphism, things which sound smart on paper but almost always lead to horrific unmaintainable code in practice.
This is a good point. Right now Kotlin is my favorite language. By default, classes and methods are closed for inheritance, and therefore for polymorphism. I like that this is the default, but you can override it when necessary. It also has data classes, which in my opinion are what most classes should be. I think it is a code smell when a class is not a good fit for being a data class.
An excellent article on type-driven design and development was posted on HN [1][2]; as darkkindness put it, "Encode invariants in your data, don't enforce invariants on your data".
> For example, let's say a method should only work with integers larger than zero
I dislike this example. The numeric system of every programming language I've ever used has been (more or less) terrible, precisely because there are extremely common and simple arithmetic types, just like this one, which it's terrible at representing. Half of the "modern" languages I've seen just provide the thinnest possible wrapper around the hardware registers ("Int32"!).
(What if I need to accept an integer in the range 5 < x < 10 instead? Am I supposed to define a new class for every range?)
Instead of saying we need a system of user-definable classes so every user can fix the numerics on their own, for each program they write, I'd say we should fix our numeric systems to support these common cases, and then re-evaluate whether we really need classes.
Are there non-numeric analogues to this type of value restriction? Maybe. It doesn't seem like a very common one, but it is interesting. Perhaps what we really want is "define type B = type A + restriction { invariant X }". I can't think of any examples offhand from my own work, but that could be because I haven't had that language feature, so I wasn't looking for places to apply it.
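One way to sketch that hypothetical "type B = type A + restriction" in TypeScript is a branded type plus a checked constructor (the `refine` helper and the `FiveToTen` name are made up for illustration):

```typescript
// A made-up "refinement" helper: type B = type A + invariant X.
// The brand exists only at the type level; at runtime a B is just an A.
type Refined<A, Brand extends string> = A & { readonly __brand: Brand };

function refine<A, Brand extends string>(
  invariant: (a: A) => boolean
): (a: A) => Refined<A, Brand> {
  return (a: A) => {
    if (!invariant(a)) throw new Error("invariant violated");
    return a as Refined<A, Brand>;
  };
}

// "An integer in the range 5 < x < 10" without a new class per range:
type FiveToTen = Refined<number, "FiveToTen">;
const makeFiveToTen = refine<number, "FiveToTen">((x) => x > 5 && x < 10);

const ok = makeFiveToTen(7); // satisfies the invariant
```
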
One is the idea of types as describing the shape of data, in the best of cases with some semantics tied to the data (how the bits are to be interpreted).

Then there is the other view, the Curry-Howard one, where types describe proofs and invariants of the program itself, and where interesting properties of program compositions can be ensured.

It seems much time is wasted when people holding one perspective debate with people holding the other.
Perhaps we should have separate words for these concepts.
Not only were custom types invented decades before classes, but the languages with the strongest emphasis on type safety tend to either lack classes or consider them unidiomatic.
> Which languages would you say are really good for defining your own types? Would they be a good fit for the example I provided where a function needs to accept integers larger than zero? Do they also allow you to define your own operations on those types? That is the part where classes seem to be a good fit, methods are basically operations supported by a type.
For the integers larger than zero example, I'd say the earliest archetype for a good design (that I know of) is Ada, which lets you declare a type with a limited numeric range like so:
type MyType is range Min .. Max;
There's already a built-in for positive integers, which is defined as
subtype Positive is Integer range 1 .. Integer'Last;
Note the subtype there. Ada recognizes that a positive integer is a type of integer, but not the other way around. And it enforces that in the type checking: You can pass any Positive into a function that accepts Integer, but you can't just pass an Integer into a function that accepts Positive. This happens even though they're not classes and this isn't OOP. Ada does have object-oriented constructs, but they are a later addition to the language. I have never used Ada professionally, but my understanding (based on book learning) is that it tends to be used conservatively.
It's similar in OCaml. Despite the O standing for "Objective" (as in object-oriented), creating classes isn't necessarily considered idiomatic. The other tools in the chest tend to be conceptually simpler, and therefore to be preferred when they will get the job done.
"define your own operations" is a requirement I'm having a hard time making sense of. To me, that is just another way of saying, "define functions", which is a feature of every language I've used except for one really ancient dialect of BASIC.
You can go with more of a Haskell/Rust style approach, where you declare a type and then define operations using either plain functions or something kinda similar to an interface (typeclasses in Haskell, traits in Rust).
In my experience, custom types often turn out to be either so restrictive as to be less than useful, or leaky abstractions. (This might be an example of where a healthy engineering compromise is unavoidable.)
At first I thought "oh, I guess the symptoms aren't as awful", then I realized you interpreted the title differently from me... Wow, yeah, I guess they hit the title length limit for HN and had to get creative.
I mostly agree with you. I think the key is to always keep improving the code, don't expect or try to achieve perfection, and keep refining your concept of what makes for "improved" code.
So if you come across a method name that does not clarify well enough what it does, and there are no comments to help, you might start by getting your questions answered, and then add comments that help clarify things. If you stop there, you have done your part; you have improved the code. The next person who comes across it, even if that person is you, is likely to have a better experience than you did. After that, if you still have more time and energy to improve it further, you can try to find a better name for that method. After renaming it, you might discover the comment is no longer adding clarity, just duplication. At that point, removing the comment would further improve the code. But if removing the comment makes the code less clear, and if in your opinion the clarity the comment adds is worth its weight in screen space and developer reading time, then removing it would make the code worse, so you should leave it there.
I agree with what you say about context: it is one of the most common things that cannot be expressed clearly enough in the code itself. However, you can still attempt to do so. An unhelpful comment is what it is, regardless of whether it is expressing context. The same goes for a helpful comment. Sometimes a comment is helpful when the missing context is why a strange design decision was made, or an unintuitive business problem of the users that the logic in the code solves.
> So if you come across a method name that does not clarify enough for you what it does, and there are no comments to help, you might start by getting your questions answered, and then add comments that help clarify things.
And the logic for not renaming the method is what exactly?
I'm not saying you shouldn't rename it instead. I'm just suggesting there may be times when adding a helpful comment is easier than renaming it. So you might start by adding a comment, and then consider whether you can rename the method so it no longer needs the comment. But you are right that if there is an immediately obvious way to rename the method to add clarity, you could just do that and be done.
Yeah, I would have recommended Kotlin. It runs on the JVM, so it benefits from the maturity of Java, but it is a more modern, better designed language. A reason to choose Rust over it is if you actually have reason to believe you will need some extra bare-metal performance over a garbage-collected language, or that you will need multiple threads. From what I can tell, the author did not really have good reason to believe either of those things; they just wanted to steer clear of whatever was wrong with the software they used at work, which a JVM based solution should have done. I am glad it worked out for them though, and knowing Rust is a good asset to fall back on, whereas learning Kotlin should be pretty easy for them. Even so, I like that with Kotlin you can take advantage of the most mature IDE ever made, IntelliJ IDEA. I think they would have completed their project faster that way.

That being said, the program they ended up with sounds very fast, and it probably is faster than if they had written it in Kotlin. It was the "right" move in the sense that it seems to have worked out well for them. Was it the "best" move? Most moves are not, and they don't need to be. Whether Kotlin would have been better probably depends on whether it was better to finish faster or to have an even more blazingly fast program at the end. I don't think a Kotlin version would have been slow or had any of that wait-for-30-seconds nonsense, but it probably would have been a little slower. Would that "little slower" have mattered? It depends on who is using the program and how. It always feels nicer to have a program that seems to respond instantly to your every whim.
I prefer to just do screen sharing with an audio conversation going, to maximize learning and flexibility. Since I also like Sococo, and screen sharing is one of the features it provides, I just stick to that. I will now explain why.
I disagree with what was said in the video about there being many reasons you don't want to share your entire screen. With a little care, it is actually the reverse: there are many things you are missing out on if you are not sharing your entire screen. I will explain this more in a moment. I also disagree that both people being able to edit the code is a good thing; more on that in a moment too. Admittedly, there are situations where you actually want to control the other person's computer, but you can't have both the benefits of being able to control their computer and the benefits of not being able to, so I prefer not to be able to.
On screen sharing: the reason I prefer to share my entire screen is basically that it is more flexible. There inevitably end up being other windows and programs with relevant information I want to show, whether a web browser or something else. Prior to sharing my screen, I reduce the number of monitors I'm using to two, and I increase the font size in my editor and browser.
On computer control: I recently read an article about strong-style pairing. It really resonated with me because it reminded me of something I either read or heard (can't remember which) about the currently recommended way of doing mob programming. I think the main benefit is that it maximizes learning. The concept is the same for both types of collaborative programming, but for some reason I had never considered that it could also work with pair programming. Basically, you have your driver and your mapper. The driver is the one with their hands on the keyboard and mouse. The mapper is the one who tells them what to do, explaining it at the highest level of abstraction they both understand, and increasing the level of detail as necessary if the driver does not understand. The driver is supposed to trust their mapper. It takes a lot of discipline to work this way, but if you are just doing screen sharing, you really don't have a choice, which makes it easier to force yourself to stick to it.
Here are the advantages of strong-style pairing. It allows the mapper to keep their head on the bigger picture while simultaneously still being the one who is really in control. Traditionally the driver would be in control, and this would free up the mapper to think on a higher level; but then it is easy for the mapper to get distracted, and there is always this nagging feeling that it is not really worth having two people at one computer. In strong style, the driver is like a human interface the mapper can use to control the computer. It is as if they could talk to the computer and tell it what to do! By not having to focus on actually making the edits, the mapper is able to plan ahead and keep more of the plan in their head. This allows them to always be ready to give the driver the next instruction as soon as the driver executes the previous one. Since the mapper is in control, they need to know how to do what they want to do; therefore the editor and language the driver uses are the ones the mapper wants to use. This is great because the driver can pick up new editors, IDEs, and languages fairly quickly, just by driving with an expert mapper for a few days. For the same reason, strong-style pairing works great when you want to pair experts with people at a lower skill level, which might be more frustrating otherwise. In that case, the expert is normally the driver. If the driver really has an idea they want to try out, you switch roles for a time rather than the driver taking control.