> Our findings indicate that the importance of science and critical thinking skills are strongly negatively associated with exposure, suggesting that occupations requiring these skills are less likely to be impacted by current language models. Conversely, programming and writing skills show a strong positive association with exposure, implying that occupations involving these skills are more susceptible to being influenced by language models...
Am I reading this correctly that the assumption here is that programming and writing skills aren't reliant on critical thinking?
There is also a table which indicates exposure to LLMs in various models and it shows Mathematicians to have 100% exposure. This bit is more puzzling to me. Maybe I am misunderstanding something here.
Programming skills that do not need critical thinking are indeed more susceptible to being influenced.
Don't forget that a lot of science requires computer programming these days.
This is the root of it: the more "generic" your work is, the more it's "out there on the internet", the more GPT can learn about it.
So, a lot of engineers are just doing the same old tricks: writing HTTP endpoints, parsing JSON, mapping data types. Yes, that could be automated.
However, modelling a problem domain into code, and the core business logic of your code, which is where your "added value" comes from and is mostly unique: that's hard for GPT.
This is also why I try to convince engineering teams to optimize for maximum time spent on the core added-value logic, the business logic layer, not all the fluff around it, such as parsing, serialization, authentication, database connections. These should be a constant cost C: once they are set up, you spend most of your time on the business logic.
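A minimal sketch of what that separation can look like (the domain, the names, and the discount rule below are all invented for illustration):

```python
from dataclasses import dataclass

# --- The "constant cost C" shell: parsing, serialization, storage. ---
# In a real app this is mostly your framework's job (HTTP, JSON, ORM);
# it gets set up once and is rarely touched afterwards.

@dataclass
class Order:
    subtotal: float
    customer_years: int

def parse_order(raw: dict) -> Order:
    # Boring translation from wire format to a domain type.
    return Order(subtotal=float(raw["subtotal"]),
                 customer_years=int(raw["customer_years"]))

# --- The business-logic layer: where the "added value" lives. ---
# Pure functions over domain types; no I/O, trivially testable.

def loyalty_discount(order: Order) -> float:
    """Made-up rule: 2% off per year as a customer, capped at 10%."""
    rate = min(0.02 * order.customer_years, 0.10)
    return round(order.subtotal * rate, 2)

if __name__ == "__main__":
    order = parse_order({"subtotal": "120.00", "customer_years": 3})
    print(loyalty_discount(order))  # 7.2
```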
When you see GPT program, it's just repeating tricks for simple problems over and over again. It's not really good yet.
> So, a lot of engineers are just doing the same old tricks: writing HTTP endpoints, parsing JSON, mapping data types. Yes, that could be automated.
And to be fair, automation for all of that already pretty much exists.
And to be fair, even though it exists, a huge majority of that work is still done manually, even though it needs zero or very little "creativity" between specification and implementation.
I agree. Usually you're already working within some framework or DSL where you can describe what you want to do. Ideally, you already have an idiomatic codebase enabling you to succinctly transform specifications into code.
Let's take the parent poster's issues:
> Writing HTTP endpoints, parsing JSON, mapping data types.
The generative model (for now) won't figure out for you: authentication, authorization, input form schema, JSON schema, required & optional fields, field constraints, entity modeling, indexing, query optimization, just to name a few basic issues we are looking at when "just developing CRUD apps".
If any of those go wrong, the result is 400s, 500s, performance problems, or security issues.
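To make that concrete, here is a rough sketch of a single CRUD endpoint, assuming FastAPI and Pydantic (my choice of framework, not the parent's); the field names and rules are invented, and almost every interesting line is a domain decision from the list above, not boilerplate:

```python
from fastapi import Depends, FastAPI, Header, HTTPException
from pydantic import BaseModel, Field

app = FastAPI()

class CreateUser(BaseModel):
    # Required vs. optional fields and their constraints are domain
    # decisions -- exactly the part the model can't infer for you.
    email: str = Field(..., max_length=254)
    display_name: str | None = Field(None, max_length=64)
    age: int = Field(..., ge=13)  # business rule, not boilerplate

def require_token(authorization: str = Header(...)) -> str:
    # The authentication policy is another decision, not boilerplate.
    if authorization != "Bearer secret-token":
        raise HTTPException(status_code=401)
    return authorization

@app.post("/users", status_code=201)
def create_user(body: CreateUser, _: str = Depends(require_token)) -> dict:
    # The handler itself is the trivial, automatable part.
    return {"email": body.email}
```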
It exists where it can be supported. Lots of small businesses don't have the bandwidth to maintain additional infra that automates this sort of work.
Which sorta brings me back around: it's likely the Big Corps that are going to be trialing GPT first because they have the excess money and resources to play with it. How useful will it be in the end?
> The more it's "out there on the internet", the more GPT can learn about it.
Interesting point. Do you think this will mean fewer and fewer domain experts will share their specific domain knowledge on a subject on their own personal blogs / twitter / open internet, just so it can't be mined by ChatGPT?
This makes sense. There’s also a huge corpus of text (training data) available on the internet for the inherently repetitive or general tasks, which is helpful for these systems.
But I wonder how they go from this to mathematics using the same line of reasoning, when we’ve seen that math is not LLMs’ strong suit.
Also think about the huge corpus of text that is not available on the internet, but is available to these systems (e.g. because Microsoft has it and can get data, perhaps anonymized, from private GitHub repos, Copilot, telemetry from WSL, VS Code, Azure, and so on).
>Am I reading this correctly that the assumption here is that programming and writing skills aren't reliant on critical thinking?
No, they're just listing skills that have negative associations and skills that have positive associations with exposure. I don't think they intend to make a statement about whether the skills themselves are correlated. It's possible for two skills to be positively correlated with each other even if one is positively and the other negatively correlated with exposure (think multidimensional vectors).
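A toy example with invented numbers (not from the paper) shows how that geometry works out:

```python
import numpy as np

# Five made-up occupations scored on exposure and on two skills.
exposure = np.array([-2, -1, 0, 1, 2])
# Both skills share a large common component, plus a small
# opposite-signed exposure component:
#   programming =  exposure + 3*y,  critical = -exposure + 3*y
programming = np.array([4, -4, -6, -2, 8])
critical    = np.array([8, -2, -6, -4, 4])

print(np.corrcoef(programming, critical)[0, 1])  # ~ +0.85: skills move together
print(np.corrcoef(programming, exposure)[0, 1])  # ~ +0.27: positive w/ exposure
print(np.corrcoef(critical, exposure)[0, 1])     # ~ -0.27: negative w/ exposure
```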
> There is also a table which indicates exposure to LLMs in various models and it shows Mathematicians to have 100% exposure. This bit is more puzzling to me. Maybe I am misunderstanding something here.
Right, another one that stood out to me is the listing of financial investments as the most affected industry. I'm certainly not letting GPT-4 make investment choices for me. I guess it could summarize analyst reports or something? They seem to be making some very speculative assumptions about what ML will be capable of in the future. The paper would be more useful if they didn't go off like this and stayed closer to published ML research.
Programming is really two activities:
1. Specifying. You work on gathering all the requirements and laying out a specification of the expected behavior of the system.
2. Translating. Once the specification is nailed down, it is taken into the hands of translators and put into actual code.
Both involve critical thinking, but translators are probably more susceptible to LLMs' negative influence.
Also, every programmer plays both roles at one time or another, so it is not that a particular person is going to be deemed useless; it's more that one part of programming work (translating) is discounted, no longer as valuable, for everyone.
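To make the split concrete, here is a made-up example: the docstring is the output of "specifying", and the body is the near-mechanical "translating" step that gets discounted:

```python
def shipping_cost(weight_kg: float, express: bool) -> float:
    """Spec, fully nailed down (all numbers invented for illustration):
    - base rate is 4.99
    - add 1.50 per kg
    - express shipping doubles the total
    """
    # The "translating" step: a near-mechanical rendering of the spec.
    total = 4.99 + 1.50 * weight_kg
    return total * 2 if express else total

print(shipping_cost(2.0, express=True))  # 15.98
```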
May I add "scheming" to your list of programming activities?
At the macro level, this would be system design. Making sure that the architecture is extensible, transparent to failures, and easy to understand and develop for.
At the micro level, this would involve coming up with clever algorithms to solve specific problems. In ways that are simple or efficient or parsimonious.
In any case, this scheming activity, which would slot in between the specifying and translating that you speak of, would involve deeply understanding both the specification (and how it might evolve) and the computing substrate (its APIs, what is efficient and what is not, etc.). I might even call it some combination of wisdom and deviousness?
I find GPT-4 awesome, and it will certainly impact "programming"; it's an open question how -- will there be a superclass of GPT-enabled programmers who take the jobs of the rest?
Right now GPT-4 is helping me solve real tasks at work, and it feels like I'm the only accountant who has Excel, but surely others will catch on.
My hunch here is that, because of ChatGPT's sometimes haphazard hallucinations, you would always need to review the code that ChatGPT has created. Also, as a group of people you sometimes agree on specific styles, but I think ChatGPT won’t adhere to such things, making its code unfamiliar and really hard to follow. We as humans have a hard time agreeing on how code should be written; does ChatGPT have a better notion? And how maintainable is that? How does it make a change?
> Also, as a group of people you sometimes agree on specific styles, but I think ChatGPT won’t adhere to such things, making its code unfamiliar and really hard to follow.
ChatGPT won't, but a facade over it, specifically trained and marketed for code, probably would. It could even have configuration for coding style, formatting and linting rules, and programming paradigm (more functional, more declarative, invent a DSL, and so on).
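Purely as a sketch of what that could look like (no such product or API exists as far as I know; every knob below is hypothetical):

```python
# Entirely hypothetical: a configuration for an imagined code-focused
# model, as speculated above. None of these options exist in any real
# API today.
codegen_config = {
    "style": {
        "naming": "snake_case",
        "max_function_length": 30,   # lines
        "formatter": "black",
    },
    "lint_rules": ["no-unused-vars", "no-implicit-any"],
    "paradigm": "functional",        # vs. "declarative", "oop", ...
    "dsl": None,                     # or a grammar for an invented DSL
}
```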
Aren't they? I'd say they can be reduced to a number of architectural tendencies (e.g. composition over inheritance, DSL or language-native code), go-to design and code organization patterns, and pure stylistic choices (like variable naming, short or larger functions, etc.)