Programming languages can't all be picked up that quickly, since semantics may differ a lot. You can't say "Give me any programming task and I'll pick it up".
What if you're a Web guy and the task is a C++ medical application? What if we're a finance company and the language is Haskell or ML? There's no way someone with no experience in a related language could become proficient without a fairly long training period in the company. What if I'm a hardware IP company and the language is Verilog?
Pretending that something that takes many years and domain knowledge to become proficient in can simply be "picked up" on the go is what sounds a bit entitled to me.
> Programming languages can't all be picked up that quickly, since semantics may differ a lot.
The second day with my new employer I was tasked with writing an analysis project in a language I'd never even heard of before joining. Within two weeks, I was writing code in that language every bit as good as my coworkers'.
Your statement may hold true for people who know one or two languages (like most CS graduates these days), but once you know a wide array of languages (my list is Python, C and C++, OCaml and SML, Lisp and Scheme, Perl, and the semantics but not the libraries of Lua/Ruby/Java/Erlang/Haskell), picking up a new language really isn't a challenge at all: the semantics almost certainly fit somewhere in the space of the languages you know, and what small bits do not are easily learned.
(As a side note, that's why I still put OCaml, Erlang, and Lisp on my resume; not because I imagine many employers would be interested in working in those languages, but because I want them to know that whatever they do their work in likely lies within the range of my existing semantic knowledge.)
Yes, my point was focused on this specific case (explaining why it sounds entitled for a fresh graduate to say he can just pick up anything), not on someone with very wide experience.
You seem to have neglected the part where I was talking about domain knowledge: I still think it wouldn't be easy for even a good programmer to start coding the control software for a microscope if he has no experience in that domain (DSP, image processing, etc.). Even if the language is C, the algorithms require very careful consideration and more than a couple of weeks' worth of study. The same would go for a programmer-analyst at a financial company, etc.
For the extreme example, just go ahead and tell a verification engineer that you'll write good Verilog within 2 weeks ;)
Verilog, financial analysis, OpenGL programming, DSP programming, and other specialized fields of programming are different from the typical web app, C#, Java, PHP, Ruby on Rails, enterprise, and GUI app programming that make up a large part of entry-level jobs.
>Within two weeks, I was writing code in that language every bit as good as my coworkers'.
Just because your coworkers sucked doesn't invalidate the point. I challenge anyone to write good code in a new language after 2 weeks of learning it. You can hack, sure, but I've never seen anyone write anything respectable in a language they've only been working in for 2 weeks. In the worst case, they think it's good and it's hard to teach them how to do it right because they think they know it all already.
Also, if you claim to know the semantics but not the library of a language, you probably don't actually know the semantics. Corner cases matter.
EDIT: An HNer says without evidence that my coworkers suck and gets upvoted; I come to their defense and get downvoted? What strange world is this in which baseless accusations are encouraged?
> Just because your coworkers sucked doesn't invalidate the point.
My coworkers don't suck. In fact, I've yet to meet a technical person here that I can confidently say I'm smarter than.
> I challenge anyone to write good code in a new language after 2 weeks of learning it.
I've taken your challenge and passed. Are you calling me a liar?
> Also, if you claim to know the semantics but not the library of a language, you probably don't actually know the semantics. Corner cases matter.
Libraries are molded by semantics, and not vice versa. It's certainly necessary to know the libraries in order to be proficient in a language, but it's not at all necessary to know the libraries in order to know its semantics well.
>I've taken your challenge and passed. Are you calling me a liar?
I'm calling you not particularly self-aware. After a year of coding in that language, the code you wrote after 2 weeks will likely look like crap to you. Unless that language is so domain specific that there's no room for design or so derivative that you can apply another language's style 100%, you almost definitely still have more to learn after 2 weeks.
RE: Semantics, that's all true, but if you don't have enough experience in the language to even say you kinda know the library, you probably don't actually grok the semantics. For example, Java, can you tell me the difference between volatile and synchronized as it regards the memory model without looking it up? What's a happens-before relationship and how is it relevant to those keywords? What's the transient modifier do? Transient's pretty irrelevant these days, but if you can't answer the first 2 questions without going to Google, you don't know the semantics of Java.
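To make that concrete, here's a toy sketch (my own made-up classes, nothing fancy) of the distinction the first two questions are about:

    // Without 'volatile' there is no happens-before edge between the writer's
    // store and the reader's load, so runLoop() could legally spin forever on a
    // stale value. With it, the write is guaranteed to become visible.
    class StopFlag {
        private volatile boolean stopped = false;

        void requestStop() {        // called from thread A
            stopped = true;         // volatile write
        }

        void runLoop() {            // called from thread B
            while (!stopped) {      // volatile read: the write above happens-before this read
                // do work
            }
        }
    }

    class Counter {
        private int count = 0;

        // synchronized gives mutual exclusion *and* visibility: an unlock of a
        // monitor happens-before every later lock of the same monitor, so the
        // increment is atomic and safely published. A volatile int would give
        // visibility but would NOT make count++ atomic.
        synchronized void increment() { count++; }
        synchronized int get() { return count; }
    }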
As I've gotten more experience, my estimation of my own knowledge has consistently gone down. I think it's a good approach.
> Unless that language is so domain specific that there's no room for design or so derivative that you can apply another language's style 100%, you almost definitely still have more to learn after 2 weeks.
There's a vast chasm between "understands the language's semantics" and "has no more to learn." I've been programming for over a decade now and there's no language about which I can say "I have no more to learn." If all you're saying is that it takes more than two weeks to know a language completely, you're not saying anything useful at all.
With regards to the language's domain specificity, it's a standard procedural language, with types and functions and values and loops and all the ordinary stuff that languages like C and PL/I and FORTRAN and Modula-2 have had since the 60s and 70s. It has first-class and higher-order functions, like Lisp has had since 1958. It has a few declarative features mildly reminiscent of Prolog to enable transparent parallelization.
There is extremely little that is new in production programming languages these days. Practically every feature of every language that anyone significant is using in production has some analogue in the languages I know. What little remains is not difficult for me to learn because programming languages are my personal area of interest in computer science. I'm not being arrogant here, I'm just being real.
Take a look at Go. A fun post was made soon after Go was released which compared it to "Brand X", an unnamed language few people have heard of: http://www.cowlark.com/2009-11-15-go/ . It turns out that practically every idea in Go is also present in Brand X; it wouldn't be unreasonable to call Go "derivative" with regards to its nature, if not the process by which it was created. I'll spoil the big reveal for you and save you the time of reading the post: "Brand X" is Algol 60, designed and implemented half a century ago. Practically every language can be dismissed as "derivative" if you're looking for a way to negate good programmers' abilities to learn something new.
> Semantics, that's all true, but if you don't have enough experience in the language to even say you kinda know the library, you probably don't actually grok the semantics.
You're wrong. Understanding of a language's semantics is prerequisite to understanding its library. Before a person can even consider understanding and learning the libraries for a language, he must understand the semantics of function calls, conditionals, loops, etc. He must understand whether the language is call-by-value or call-by-name or call-by-need or call-by-object-reference. He must understand how the language handles scoping. He must understand which entities are first-class and which are not. All of this is necessary (though not sufficient) for even beginning to grok the standard library. On the other hand, understanding the standard library is absolutely inessential to understanding the semantics of a language: if a programmer doesn't know what functions and types are available in the standard library, with knowledge of the language's semantics he can reimplement them easily.
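To give a tiny, made-up example of the kind of semantic fact I mean (in Java, since it's already come up): Java is call-by-value, where the value of an object-typed expression is a reference, so a callee can mutate the object it receives but can't rebind the caller's variable. You have to know that before any library's documentation even makes sense:

    // Made-up example: call-by-value of references in Java.
    class Box { int value; }

    class PassingSemantics {
        static void mutate(Box b) { b.value = 42; }   // mutation is visible to the caller
        static void rebind(Box b) { b = new Box(); }  // only reassigns the callee's copy

        public static void main(String[] args) {
            Box b = new Box();
            mutate(b);
            rebind(b);
            System.out.println(b.value);   // prints 42
        }
    }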
> For example, Java, can you tell me the difference between volatile and synchronized as it regards the memory model without looking it up? What's a happens-before relationship and how is it relevant to those keywords? What's the transient modifier do?
I didn't claim to be a Java guru. Very few Java programmers actually need to know the answers to those questions in day-to-day programming; I could probably (if asked to, which I'm not, thankfully) write thousands or tens of thousands of lines of Java without ever needing to know the answer to those. There's certainly a difference between my understanding of Java's semantics and the understanding necessary to answer the questions you're asking, but for the vast majority of tasks, productivity happens far closer to my end than yours.
>> For example, Java, can you tell me the difference between volatile and synchronized as it regards the memory model without looking it up? What's a happens-before relationship and how is it relevant to those keywords? What's the transient modifier do?
>I didn't claim to be a Java guru. Very few Java programmers actually need to know the answers to those questions in day-to-day programming; I could probably (if asked to, which I'm not, thankfully) write thousands or tens of thousands of lines of Java without ever needing to know the answer to those.
You're wrong, and this proves the point. This isn't 'guru' level; the workings of volatile and synchronized are even on the basic SCJP test. This may not come up in your domain, but if the project deals with threading, an experienced dev will be more productive than you right off the bat, after two weeks, and probably after a year.
In fact, threading is pretty tricky, so it's likely you will be counterproductive to the team until you've made all the Java threading mistakes you need to make in order to understand why your code is bad. And you might never know your code is bad unless a domain-experienced Java dev explains it to you.
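To give one made-up but classic example of the kind of mistake I mean: double-checked locking without volatile. It compiles, passes casual testing, and is still broken under the memory model:

    // Broken double-checked locking: without 'volatile' on the field, the write
    // of the reference may become visible to another thread before the writes
    // made inside the constructor, so getInstance() can hand out a partially
    // constructed object.
    class Config {
        private static Config instance;   // BUG: needs to be volatile for this pattern

        static Config getInstance() {
            if (instance == null) {                 // unsynchronized first check
                synchronized (Config.class) {
                    if (instance == null) {
                        instance = new Config();
                    }
                }
            }
            return instance;
        }
    }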
Well, I suppose we're operating with a different definition of "understand", regarding the semantics.
RE: Sawzall, haven't used it, but assuming it's similar to Pig, that's pretty domain specific. You'll probably write different code a year from now than you're writing now, though.
It's about more than your respective definitions of "understand". You learn differently.
You sound like a "practice" guy. To learn a language, you need to see and write code in that language. You take a good look at the standard library, you practice, you learn the trivia and how things are done, and as you become proficient, you acquire a good intuition for the language's semantics.
jemfinch, on the other hand, is more of a "theory" guy. To learn a language he needs the (reference) manual. He reads it and compares the various features of the new language with the ones he already knows, which lets him get the semantics fairly quickly. Proficiency only comes later, as he picks up the trivia he needs through practice.
The difference between the two of you is that for you, understanding takes practice, while for him, it's a prerequisite to practice. You could argue that both approaches eventually amount to the same thing (to write actual code, you need proficiency anyway), but they do not.
Programming languages aren't just rigid tools your boss forces you to use (or at least they shouldn't be). They're something you should choose, and sometimes transform, or even build. In such cases, you often need to choose between several languages, some of which you've never practised, and some in which no code has been written yet. So you need to judge the languages on their own merits.
Furthermore, the "practice first" approach tends to make you blindly parrot current styles, which makes you more likely to repeat past mistakes without realizing it. The "understanding first" approach may be more error-prone at first, but at least you can compare two styles (your own and the established practice) and challenge either of them.
It's not so much that we learn differently; I think everyone in CS who's any good is a "theory" guy to some extent.
But understanding the basic flow of a language well enough to write "hello world" doesn't mean you fully grasp the semantics. Sure, you need to get a basic understanding down before you start coding, but you don't actually understand things until you've had to look at them from a few angles. I used to assume I had something down once I had the basic idea, but I've been bitten enough times by all of those unspoken assumptions that I have a healthy respect for practice as well, is all. First inclinations notwithstanding.
> Programming languages can't all be picked up that quickly, since semantics may differ a lot. You can't say "Give me any programming task and I'll pick it up".
Speak for yourself. I learned AutoLisp with one bit of sample code someone left behind and a bit of Googling in about a week, starting from zero. Nobody else where I worked knows it at all. But from that, I coded a full set of functions that did everything we needed.
I reverse-engineered the semantics of some crazy internal scripting language for some of our industrial machines. Nobody bothered to document that thing at all, and it was custom to my employer.
I learned to use a different testing tool from nothing more than a bit of sample code and being told that it was based on VBS. I then improved the code I was given.
Some of us really can figure out the semantics of something just from reading a few examples. This also works with foreign languages. Particularly if I can get parallel translations (e.g. subtitles), I can start working backwards to extract meaning and solve for unknown words.
In short, you'd be surprised how quickly we can figure things out, especially if we've seen something like it before. Sure, there are pitfalls, but anywhere you have nice syntax-aware IDEs and whatnot, it's not nearly as much of a barrier as you seem to think.
The more languages you've used, the easier it all is. This goes for both human and computer languages.
Once you know one language well, many of the skills transfer. Besides, he states he is looking for an ENTRY level position and fully realizes he's not a guru.
Yes, many skills transfer, like how one can write C++ from reading the docs without ever learning C++, or how one can write C++ having only ever learned C. The part you're missing is that it doesn't make you any good at C++; in fact, I'd argue it makes you terrible if you're going to try to write X code in Y language. The fact that it's entry-level doesn't mean anything. A lot of these entry-level jobs aren't something where you can just start pasting code together.