Hacker News

As somebody who was paid for many years to turn AI researchers' code into usable products, I have one piece of advice: if you want to become an AI researcher, don't fix their code. I did this in the hope that I would eventually get to work directly on an AI project. When I finally got to participate in such a project part-time, it turned out that I could finish in a matter of days a task that would take the AI researchers weeks. That didn't please them at all, so soon enough I was switched full-time to my non-AI project, since that one "needed me more". If you want to become an AI researcher, do AI research projects, simple as that.



I'm not sure I follow. They weren't pleased with the speed at which you completed tasks? I feel this might be specific to your company. I find this to be a very desirable trait, considering most professors at my university won't take undergraduate researchers on the basis that they simply don't complete tasks as quickly as a PhD would. Different contexts but I feel this example still holds some merit.

Also, we're entering a paradigm in which a lot of research is constrained by compute, and it can be very difficult to just "do AI research projects" when such projects involve training policies in virtual environments or designing the next transformer, for instance.


Professors, of course, don't feel threatened by students, so they want them to be as good as possible. But in a company where I had the same pay and title as the AI researchers, just a different field of work (mine applications, theirs research), they realized real quick that if I kept doing 2-3 times more tasks than they were, I would gain experience with everything they knew. Everybody wants to have their own little field where they are the boss. A newcomer who keeps putting their nose everywhere might not be very desirable. I knew what I was doing and I knew the risk I was running. I was hoping that the project manager would support me. The manager did not, and I was out.


I feel you.

Still, after hearing this story, my takeaway is not "Do not fix AI researchers' code if you want to become an AI researcher."

Rather, "Good news! It turns out a software engineer has much to offer AI researchers! However, as is all too often the case, beware of politics: you need to find an organization where the AI folks won't be threatened by your fast work but will welcome you."


I wager that in some parts of your analysis you're not dead on the money, but in broad strokes this all rings true. The political acumen necessary to survive as a programmer is intense. It's one advantage the black programmers I've met had a better handle on than white programmers, who are more often on the autism spectrum and therefore terrible at office politics (except when being oblivious is advantageous, which it can be). Given the racial stereotypes present in American culture, it was ironic that both black coders were employed while the white coders were not (this was in a hacker house), but it was simply because they were better coders. No issues there. We got along great; in terms of office politics, and without being aggressors either, they just had very good instincts. It isn't talked about in plain terms enough, even on HN. Well, I guess it is discussed a lot, but never like "alright, here's the strategy if you're facing X", the way you do in the face of other commenters' naysaying.


Probably true for any other kind of researcher. Building tools (even hardware) is worthy of a researcher's resume. During my Ph.D. I worked on adding Python bindings to a few simulators. One paper has been pending for over a decade, and the other was published but never cited: even though people use the Python bindings, they cite the original.

I am no longer in academia, but I have promised myself not to engage with academic open source software. There is simply no incentive for development. Well, when you think about it, most academic work is publish-and-forget: maintenance is not a strict requirement.


My favorite quote on the subject is:

> Every great open source math library is built on the ashes of someone’s academic career.

From William Stein, lead developer of the computer algebra system, Sage: http://wstein.org/talks/2016-06-sage-bp/bp.pdf


As another ex-academic: maintenance is even bad for your career!

You're being evaluated based on how many papers you can publish, so the academic process selects for good (well, fast...) writers, not good coders. Papers are selected for novelty, so it's much easier to publish a paper based on a 'novel' algorithm than one based on v2.0 of that algorithm. There might be novelty in the v2.0, but it's risky: reviewers and editors might not agree. There's always novelty in the v1.0 (well, it's academia, so it's more like v0.1).


As someone who isn't in academia, I've heard of this being a problem before, but for research related to computer science, it seems like private research at companies like Microsoft might be a better fit. A lot of interesting research comes out of Microsoft, and I don't think they have a problem of over-incentivizing the speed of publication. That said, I'm not in academia (and have never done research) nor employed by Microsoft; I'm an undergraduate in computer science, just speculating. Do you think this could be plausible, or is it way off?


"Many years"? So AI as it existed prior to the current deep learning craze?

Very different worlds.



