
IMHO, inheritance (especially the C++-flavored kind, with its access specifiers and myriad rules) has always scared me. It makes a codebase confusing and hard to reason about. I feel the eschewing of inheritance by languages such as Go and Rust is a step in the right direction.

As an aside, I have noticed that the robotics frameworks (ROS and ROS2) rely heavily on inheritance and some co-dependent C++ features like virtual destructors (to call the derived class's destructor through a base class pointer). I was once invited to an interview at a robotics company because of my "C++ experience" and grilled on this pattern of C++ that I was completely unfamiliar with. I seriously considered removing C++ from my resume that day.


To me, inheritance makes sense if you view your codebase as actual "Objects".

The reality is that a codebase is not that simple. Many things you create are not representable as real-world "objects" - to me, this is where it gets confusing to follow, especially as the code gets bigger.

I remember those OOP books (I cannot comment on modern OOP books) where the first few chapters would use Shapes as an example, where a Circle, Square, Triangle, etc. would inherit from the Shape object. Sure, in simple examples like this it makes sense.

I remember covering inheritance and how to tell whether it or composition is the better fit... the "Object IS X" versus "Object HAS X" test - so you base your hierarchy around that mindset.

- "A Chair is Furniture" (Chair inherits Furniture)

- "A Chair has Legs" (Chair has array of Leg)
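In code, the two mindsets look something like this (a toy Python sketch just for illustration; these classes are made up, not from any real codebase):

    class Furniture:
        def __init__(self, name):
            self.name = name

    # "A Chair is Furniture" -> inheritance (is-a)
    class Chair(Furniture):
        def __init__(self):
            super().__init__("chair")

    # "A Chair has Legs" -> composition (has-a)
    class Leg:
        def __init__(self, length_cm):
            self.length_cm = length_cm

    class ChairWithLegs:
        def __init__(self, leg_count=4):
            self.legs = [Leg(45.0) for _ in range(leg_count)]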

I will always remember my first job - creating shop floor diagrams where you get to select a Shelf or Rack and see the visual representation of goods, etc. My early codebase was OOP... a Product, Merchandise, Shelf, Bay, Pegboard, etc. Each object inherited something in one way or another. Keeping on top of it eventually became a pain. I think there were, overall, about five levels of inheritance.

I reviewed my codebase one day and decided to screw it -- I would experiment with other approaches. I ended up creating simple classes with no inheritance. Each class was isolated from the others, with the exception of a special Id which represented "something" like a Pin, or Shelf, etc. Now my code was flexible... "A Shelf has this and this".

In later years I realised what I did was along the lines of what is commonly known as ECS, or Entity-Component-System. It seems popular in games (and I viewed that project in a game-like fashion, so it makes sense).


I’m not on the cutting edge of gamedev, but I still believe that ECS is a solid pattern with lots of use cases.


I do make games at home and don't follow ECS religiously, but it is there depending on how I solve a problem.

:-)


Sounds like relational databases: Entities are IDs. Components are tables with an ID column.


EXACTLY!

It is very much like building a database.

Each class I created was very much like a table.

I had my "Objects", each a simple class with an Id. In ECS land, this would be better known as an entity.

I would then create classes (or tables) for each "feature" to support.

A feature could be

Is it Solid?
Is it Visible?
Is it a Shape/Model?
Does it have a Position?
Does it have Children?

etc.

Each feature an object supports gives it extra data. So each feature is essentially a table with an Id, ObjectId, and additional fields.
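In Python-ish terms, the shape was roughly this (a toy sketch from memory; the names, fields, and dict-as-table are illustrative, not the actual code):

    from dataclasses import dataclass, field
    from itertools import count

    _next_id = count(1)

    @dataclass
    class Entity:                      # the bare "Object": just an Id
        id: int = field(default_factory=lambda: next(_next_id))

    @dataclass
    class Position:                    # one feature = one "table" row
        id: int
        object_id: int                 # points back at the owning Entity
        x: float
        y: float

    positions = {}                     # the "table", keyed by row Id

    def add_position(obj, x, y):
        row = Position(id=next(_next_id), object_id=obj.id, x=x, y=y)
        positions[row.id] = row
        return row

    shelf = Entity()
    add_position(shelf, 2.0, 5.0)      # the Shelf now "has a Position"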

Basically, I am "creating my object hierarchy" at runtime, not at compile time with OOP methods. This made it sooo much more flexible when more companies wanted to use the software, especially with their unique approaches to shop management. All configurations were in XML files -- much better than trying to change an OOP hierarchy to suit ALL companies' rulesets.

This is going back a few years now. It's amazing what comes back to memory... how I wrote most of this entirely in JavaScript, to eventually moving to a backend language using AJAX, to simplifying the code with jQuery.

Fond memories.


To be fair, deleting a derived object through a base class pointer is pretty basic C++. Slicing and virtual destructors are usually the first couple of things you learn about after virtual methods and copy constructors/assignment.


Quite a few sections of C++ can be classified as "pretty basic C++". None of the rules are complicated in isolation, but that doesn't necessarily make the language easy to reason about.


Amen.


Interesting. So they convolve the k, v, q vectors? I have been trying the opposite.

I have been working on a classification problem on audio data (with context sizes somewhere between 1000 and 3000, with potential to expand later), and have been experimenting with adding attention on top of a CNN for that task.

I tried training a vanilla transformer, but at the sizes I am aiming for (5-30M parameters) the training is incredibly unstable and doesn't achieve the performance of an LSTM.

So I went back to CNNs, which are fast to train but don't achieve the losses of LSTMs (which are much slower to train, and for higher context sizes you get into the vanishing gradient problem). The CNN-GRU hybrid worked much better, giving me my best result.

The GRU layer I used had a size of 512. For increasing context sizes, I'd have to make the convolutional layers deeper so as not to let the GRU grow too large. Instead, I decided to swap out the GRU for a MultiHeadAttention layer. The results are great - better than the CNN-GRU (my previous best). Plus, for equivalent sizes the model is faster to train, though it hogs a lot of memory.
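Roughly, the architecture looks like this (a simplified PyTorch sketch; the layer sizes, kernel sizes, and mean pooling here are placeholders, not my exact model):

    import torch
    import torch.nn as nn

    class CNNAttentionClassifier(nn.Module):
        def __init__(self, in_channels, n_classes, d_model=256, n_heads=4):
            super().__init__()
            # CNN front-end downsamples the sequence and emits d_model features per step
            self.cnn = nn.Sequential(
                nn.Conv1d(in_channels, d_model, kernel_size=5, stride=2, padding=2),
                nn.ReLU(),
                nn.Conv1d(d_model, d_model, kernel_size=5, stride=2, padding=2),
                nn.ReLU(),
            )
            # self-attention sits where the GRU used to be
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.head = nn.Linear(d_model, n_classes)

        def forward(self, x):                       # x: (batch, channels, time)
            feats = self.cnn(x).transpose(1, 2)     # (batch, time', d_model)
            attended, _ = self.attn(feats, feats, feats)
            return self.head(attended.mean(dim=1))  # mean-pool over time, then classify

    logits = CNNAttentionClassifier(64, 10)(torch.randn(2, 64, 1000))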


What codec were you using for the audio data?


Burmese Days is my favorite Orwell. It captures the state of the colonies under British rule extremely well - the British new arrivals who are derisive of, and at best ignorant of, the local customs; the British who've grown up in the colony and are more sympathetic but stuck in a world where they'll never truly belong; the extremely corrupt rich locals; the poor mob who are easily misled; the endemic corruption in everything...

This novel could have been set in post-independence India and a lot of the themes would have rung true.


> novel could have been set in post-independence India and a lot of the themes

The colonialists did leave, but the baggage never did. Turns out oppression/corruption/subjugation/feudalism/communalism/obscurantism/cronyism is politically and/or economically profitable. The current ruling class learnt well from their predecessors.


I've read 1984 as a poorly-drawn frame story meant to convey* Orwell's sincere argument, in the form of Theory and Practice of Oligarchical Collectivism, which baldly states that even without colonialism in the picture, you ("the Middle") probably will still have to live with the extremely corrupt rich locals ("the High") and the poor mob who are easily misled ("the Low"), etc. etc.

* or at best to provide a motive filling in the implicit blank left by TaPoOC's truncation: "deeper than this lies the original motive, the never-questioned instinct that first led to the seizure of power and brought DOUBLETHINK, the Thought Police, continuous warfare, and all the other necessary paraphernalia into existence afterwards. This motive really consists..."


ПЖ would claim later that the oppression and continuous warfare are merely unfortunate side effects! (Joke?)

The motive consists simply of helping society be more effective!!*

(Riffing on 2-ish Nussbaum-Sen derived questions:

1. How to hand out capabilities to the effective/viral but not necessarily wise.[0]

2. How to immediately extract the foremost interesting issue with any framework... that some undergrad can fix before you even sit down at your studydesk in your chalet tonight.

*at learning/earning as a team. "too much information" is consensually the thing to insure against in the Overton; other things detractors ("prigs"/殺氣者) warn you about are to be reflexively dismissed as acts/will of god(s)

**Otoh info asymmetry foments corruption. "moral hazard" is translatable to irl outcomes.

***not touching the CFG analysis directly atm, indeed, because those tractability monsters pull you under when you forget to focus on interestingness

https://aarushgupta.com/2024/03/30/procrastination

[0] S-N's "thresholds" I read as a placeholder for a thermodynamically grounded app of "festina lente". Colloquially: When your heat baths are way too mismatched, your Carnot efficiency...


Nowadays, even the term "epoch" is not well defined. Traditionally it meant a pass over the entire training set, but datasets are so massive today that many now define an epoch as X steps, where a step is a minibatch (of whatever size) drawn from the training set. So one epoch is a random sample of X minibatches. I'd guess the logic is that datasets are so massive that you simply pick as much data as you can fit in VRAM.
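Concretely, the convention looks something like this (a rough PyTorch-style sketch; the dataset, batch size, and step count are placeholders, and the model/optimizer/loss are supplied by the caller):

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    dataset = TensorDataset(torch.randn(100_000, 32), torch.randint(0, 10, (100_000,)))
    loader = DataLoader(dataset, batch_size=256, shuffle=True)

    STEPS_PER_EPOCH = 1000   # an "epoch" is just this many minibatches,
                             # regardless of how big the dataset really is

    def run_epoch(model, optimizer, loss_fn):
        it = iter(loader)
        for _ in range(STEPS_PER_EPOCH):
            try:
                xb, yb = next(it)
            except StopIteration:    # reshuffle and keep sampling if we run out
                it = iter(loader)
                xb, yb = next(it)
            optimizer.zero_grad()
            loss = loss_fn(model(xb), yb)
            loss.backward()
            optimizer.step()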

Karpathy's Zero To Hero series also uses this.


A 7k param LSTM is very tiny. Not sure if LSTMs would even work at that scale although someone with more theoretical knowledge can correct me on this.

As an aside, I'm trying to train transformers for some classification tasks on audio data. The models are "small" (like 1M-15M params at most) and I find they are very finicky to train. Below 1M parameters I find them hard to train at all. I have thrown all sorts of learning rate schedules at them, and the best I can get is that the network learns for a bit and then plateaus, after which I can't do anything to get it out of that minimum. Training an LSTM/GRU on the same data gives me a much better loss value.

I couldn't find many papers on training transformers at that scale. The only one I was able to find was MS's TinyStories [0], but that paper didn't delve much into how they trained the models and whether they trained from scratch or distilled from a larger model.

At those scales, I find LSTMs and CNNs are a lot more stable. The few online threads I've found comparing LSTMs and Transformers had the same thing to say - Transformers need a lot more data and model size to achieve parity with, let alone exceed, LSTMs/GRUs/CNNs, maybe because the inductive bias those architectures provide is hard to beat at those scales. Others can comment on what they've seen.

[0] - https://arxiv.org/abs/2305.07759


I don't have much help to offer, but just to echo your experience... at my group we have tried to train Transformers from scratch for various NLP tasks and we always have been hit with them being extremely brittle, and BiLSTMs working better. We only succeeded by following a pre-established recipe (e.g. training a BERT model from scratch for a new language, where the architecture, parameters and tasks are as in BERT), or of course by fine-tuning existing models, but just throwing some layers at a problem and training them from scratch... nope, won't work without arcane knowledge that doesn't seem to be written anywhere accessible. This is one of the reasons why I dislike Transformers and I root for the likes of RWKV to take the throne.


I think the "arcane knowledge" part is true for LLMs (billions of parameters). But there are lots of people who train models in the open in the hundreds-of-millions realm, just never below. Maybe transformers simply don't work as well below a certain size and data threshold.


The article is co-written by a member of Kerala's Planning Board and heavily oversells the state. I'd say Kerala is nowhere near rich so even the title is technically incorrect.


When COVID was spreading around China, the state govt was putting out public announcements about the disease and what symptoms to watch out for. I remember that even in Feb 2020 there were public announcements in train stations. There is a lot of emphasis on education and health in the state. Granted, it may not be as rich as other states, but it leads them in a lot of other markers.


Kerala is rich by Indian standards, but its GDP per capita is still only around $3500/year, making it nowhere near rich by world standards.


Even by Indian standards, it’s #11 by GDP per capita among 27 Indian states. I guess top third is not bad, but not exceptional either.


That may be, but the topic of the thread is how rich Kerala supposedly is, not how super awesome their public train announcements are. The claim is not just false; the article is outright propaganda, given that one of the co-authors works for the state government.


I guess my main point is that a communist-type govt was not exclusively bad for Kerala, since it put a lot of effort into improving education and public health.

You can look at other sources to see how well Kerala is doing with respect to other states, but I do agree the article over-emphasised the good parts without any hint of the bad parts.


To put this in context, during Covid hundreds of bodies were being dumped in the Ganges river, buried in shallow graves on the sandbars, in states like Uttar Pradesh. The state govt took an active role in removing the grave markers so that an accurate estimate of the numbers could not be ascertained. These events were covered by local bloggers, vloggers and news channels.

Kerala is one of the few states that managed medical oxygen supplies pretty well. In many other states, people died because hospitals ran out of it.


Is that because of the communist government, or simply because the state has a lot of money from the Middle East?


Spending money to remove grave markers is a choice. It's not about how much money you have, it's whether or not you care about the people in question.


In India at least, 'communism' or 'Marxism' in the names of political parties that actually run a state is just a name that has stuck. These entities and people have to be a lot more pragmatic. This is in contrast to the armchair theorists you would find in think tanks, advisory boards, universities, etc. These would be people who do not run for elections.

Now, as for Kerala's handling of Covid, that was funded by state govt coffers. So Middle East money had a negligible contribution. What made a difference though is a history of preference for investing in social safety nets and basic infrastructure for people, such as schools, nutrition, hospitals.


What really happened was that the health authorities in Kerala were prepared for an outbreak because Kerala has a history of past outbreaks and a health system with very well-trained doctors and health professionals to handle it. See the 2018 Nipah virus outbreak in Kerala, which was handled really well; there was even a popular movie about it (Virus) that came out the year after.

It's the same story in East Asian countries, where they had the SARS outbreak in the early 2000s and so were prepared for new outbreaks.


To be clear, I'm not saying Kerala is particularly bad by regional standards; it's not. But compare Kerala and India as a whole with other parts of Asia and they're not doing well. Look at China vs India in the 1970s vs 50 years later. Compare India/Kerala and Thailand in the same time frame. Kerala and Korea, etc. etc. South Asia as a whole is doing worse than many other parts of Asia. The Kerala government excels at what many socialist governments are good at: praising themselves. In reality it has made little difference.

India has a lot of other issues, and I grant you that the socialist ideology probably had a positive influence in some ways other than economics, particularly socially. But no offense, if you've ever walked the streets of Trivandrum and other cities, you know there are much more pressing issues.


The article is a classic submarine[0] for Kerala.

"The state has one of the highest concentrations of startups". I laughed out loud at this one. Of all the half-truths peddled in that article, this was easily the most hilarious and egregious.

[0] - https://www.paulgraham.com/submarine.html


Perhaps they mean small business entrepreneurship, not tech startups?


Whatever it is, I am not buying it. Kerala is notoriously an incredibly hard place to do business in, so a line like this needs some real hard stats to back it up.


That is because they want to undercut the US and prevent them from making money. It remains to be seen whether they'll be as benevolent about making tech open source if they are the clear winners. Frankly, I don't see why they would.


Classic. If doing good makes someone feel good, then they must be selfish! And their doing good is in fact not good! /s


I'm sorry, but your comment comes off as racist. You've given no reason as to why they wouldn't do the same, and your opinion seems to come entirely from an assumption of ill will. Why would you think the West has shared their knowledge while China won't? I suggest you reevaluate your biases.


FWIW I am not American or even from the West. At a nation-state level, the West (US) shared knowledge with the intention of offshoring production for cheaper goods, not out of the kindness of their hearts. They still protect key technologies like the jet engine and bleeding-edge fab tech. It is the same when it comes to Chinese companies that are at the top of their game - DJI and Bambu Labs don't open source their designs, last I checked, and their track record when it comes to software is also not great. Simply put, companies only open source when it makes business sense.

I am simply assuming that China follows the same principle - try to wring maximum advantage out of their industrial might and commoditize their complement wherever possible. Because of China's unique single-party system, their strategies are very top-down, coming straight from the CCP in key areas like AI and robotics. It is not racism, simply realpolitik.


The CCP strictly controls knowledge and information access within China through rigorous censorship. It would be surprising if Chinese companies were allowed to just make their advancements open source without review, if those advancements would have a meaningful impact on the state of technology.

It is strange and inappropriate to call that observation racist.


I don't think it's intentional racism, but a lot of what many Western folks think of China comes from propaganda. China has a visa-free policy for many countries now. Go check it out in person.


To be clear, believing propaganda generally is also not racist, unless that propaganda is specifically racist. And even if you believe censorship and authoritarian oversight of companies by the CCP is western propaganda, it is still not racist.


Things are done for reasons.

We can discuss the reasons why they might or might not open source things, or we can agree not to, because someone might call us racist for discussing corporate and economic motivations.

The West mostly has not shared their knowledge. Most software is not open source.


"decision-makers can remain irrational longer than you can remain solvent" - Very correct. Whether or not AI actually comes for your job, the fact that enough people at the top think so is enough to cause trouble.


A non-technical friend was asking about the prospects of AI 'taking over' jobs. I told him that I'm less worried about 'Skynet' than I am about 'Slopnet', where bad takes on the applications of 'AI' just make life harder for all of us. That'll come more from decision-maker irrationality than from the tech itself.


This is the problem. Right now we're not in the "I need AI to work or get out" stage. We're in the "AI might completely upend reality" stage.

It's just people telling stories to find bigger fools. Like the ads claiming they sell an AI employee that never needs sleep and never talks back.

Those ads are the same thing as the signs shoved into the lawn near the McDonald's drive-through that look like they were drawn with a Sharpie but are really mass-printed. "Real estate investor looking for pupil, trade my money" kind of stuff.

They are purposefully looking for suckers who will overlook the sketchiness. They don't want normal people applying; that reduces the pitch's effectiveness.

Only desperate people who will fall for anything.


The same is true for remote work. All the engineers know the return-to-work policies are dumb, but all the decision makers have decided we're all wrong.


then why don’t they co-locate teams when they get RTO’d? I keep hearing about people who have to go sit in a mandatory hot desk but are still stuck on Zoom all day. Seems like the worst of both worlds


It’s ordinary corporate dysfunction. The mandates come top-down. People in management don’t think too hard about exceptions. The people making decisions are far-removed from the consequences of their decisions.


It’s not really an exception though. These are the same people who spent the last 20 years singing the praises of offshoring and follow-the-sun. It’s just trend chasing.

Honestly I think the mistake we made was calling it “work from home” instead of “telecommuting”.


> Honestly I think the mistake we made was calling it “work from home” instead of “telecommuting”.

I am curious. Why do you think calling it telecommuting would have made any significant difference? And what difference would it have made? Where do you imagine we would be today if more people referred to it with that word?


My drive-by opinion: "telecommuting" has an advantage in optics/marketing and flexibility over "work from home" for both business leaders and employees. If I tell a board of directors or shareholders that "80% of our workforce performs some fraction of their weekly tasks ______", I imagine the following:

- "from home" elicits images of relaxation and lost productivity, while "via telecommuting" sounds like a commute still takes place and work is just as productive

- "from home" sounds like retreating to a comfort zone, while "via telecommuting" sounds like embracing a new technology or skill

- "from home" sounds like remote workers ought to be performing their tasks from their domicile only, while "via telecommuting" sounds like remote workers can do their work wherever they are

If businesses had adopted "telecommuting" terminology, I believe business leaders would not feel obligated to push back in order to regain productivity. I think it's easier to attack the trend of WFH given the points above. I actually agree with the proposal that WFH is a weak terminology, but had never sat down and thought about it before.

From the perspective of initial adoption, I think it would have happened just as fast. Workers were thrust into remote work arrangements during COVID, and everyone would have quickly gotten the gist of what "telecommuting" means, so it would have been the new buzzword to attract talent in job listings just as "remote" or "work from home" have been. Just without the downsides in CEO perception.


The only problem is that telecommuting is not a new buzzword; it was all the hype in the 90s/00s, which is probably reason enough to pick a new word.


Yes and business leaders ate it up. They only pushed back when it was rebranded as “work from home”.


RTOs generally have nothing to do with any of the things they say. They are just layoffs.

You can't argue with them about the effectiveness of remote work. They aren't trying to optimize work. They are trying to fire people.

Working from home doesn't fire people; being more productive and happy doesn't fire people. Your mental well-being doesn't have any bearing on how many people they need to fire.


Based on their own gut.


Exactly. It’s less about whether AI can replace certain jobs and more about the fact that companies are making decisions as if it will. That alone reshapes hiring, budgets, and job security


We are still doing scrum-like stuff after all. And they are dragging people back to the office. Decision makers have billions at their disposal to be inefficient with.

