i didn't say it isn't real security, this is going to definitely be a major field.
However, currently these attacks are all some variation on "ignore previous instructions", and taking the language of fields where the level of sophistication is much higher, looks a bit pretentious
"ignore previous instruction" is the entire problem though.
In traditional application security there are security bugs that can be mitigated. That's what makes LLM security so infuriatingly difficult: we don't know how to fix these problems!
We're trying to build systems on top of a fundamental flaw - a system that combines instructions with untrusted input and is increasingly being given tools that allow it to take actions on the input it has been exposed to.
At a basic level any Lisp-like with an editor integrated REPL lets you put your cursor over the `greet` invocation below and evaluate it, and have the results shown in the manner of your choice (IDE window, inline temporary comment, etc).
This is all in the file as saved to your version control:
I agree. It's career-driven bullshit bingo. It gives junior's an opportunity to gain an advantage over senior's ("Oh you don't know JavaScript micro-framework 2025 #10?") and everyone involved in billing for time (employees and consultant) a reason to "refactor" and keep the magic money tree around.
I would argue the web has remained more stable than any other development space. You can still build something with jQuery code from 2007, and it's still supported by modern browsers. You can even start using modern browser APIs without even upgrading jQuery. You could add any modern library as long as it doesn't depend on jQuery.
If you had a python 2 codebase from the same year, you would basically have to scorch it and do a rewrite.
To solve for that you can serve WebSockets on the same port as HTTP. Most web server application's don't know how to configure that, but I know it can be done because I doing it right now in my own applications.
Based on the averages, a DSL is going to suck and be a curse upon your future self/maintainers.
You provided a Clojure alternative, but without using the precise feature designed to alleviate this problem: transducers[0]. Transducers are the combination of functional composition and an indirection designed to decouple the data transformation pipeline from any specific source (e.g. async vs in-memory doesn't matter).
Yeah, I agree. DSLs tend to become a maintenance burden over time! transducers let you compose data transformations independently of the data source (in-memory, async, etc.), which sidesteps a lot of the pitfalls that a full DSL might introduce. Thanks so much for highlighting this!
- 90% of abstraction is dreadful. That means _your_ particular abstraction is almost certainly bad, and we don't want to learn it.
- There is a lack of respect for the history of programming. IMO it has caused the industry to be stuck in a perpetual cycle of half-baked rediscovery.
- Similarly, a type of "FAANG-syndrome" exists and allows sub-par ideas to take over mind share of the industry. Once a technology picks up enough momentum, it will snowball and we're stuck working with legacy trash almost immediately. Developers legitimately seem to believe each trend is good.
- Our industry's shared vocabulary is too weakly defined. Phrases like "The right tool for the job" are ubiquitous but essentially meaningless and used as a form of shorthand "I currently feel this is correct". If we had a real professional lexicon, the first thing juniors would learn would be to enumerate reasoning to a precise degree. IME most "senior" devs can barely do it.
- Dynamic languages are good, actually. IDE auto-completion and full-project renaming are the features that hit above their weight when using static type systems. IMO the remaining benefits of static types are within the same order of usefulness as the pros and cons of dynamic langs; You can argue about them on a case by case basis. This means static types aren't inherently better than dynamic languages (which is the popular opinion of the day), there's just a tooling issue right now. Therefore, dynamic langs will eventually make a comeback in popularity.
How do you enforce input and output compatibility across modular functions and projects without strong types? Is there a better standard way to specify the accepted input shape and expected output options?
Strong types are not orthogonal to dynamic types, but I'll assume you're referring to static typing instead. My opinion on this is static type systems are great and provide amazing fit when you can model an entire system (e.g. a compiler). I think they are overrated for typical business software subject to change over time. IMO this is because business data tends to outlive any particular code base, or compiler version.
The way I see it is there's a spectrum of ways of handling this, from type systems, validation code, documentation, integration tests (validating runtime behaviour), and static analysis tooling, but I don't agree that a static type system is the best (and is often only barely adequate) way of integrating modules. The optimal solution is going to vary based on each project's requirements: e.g., is the modular code going to be consumed via networked API or as a library, and is it internally controlled or third-party? How many teams are going to be working on it? How much care has been taken with backwards compatibility? If we break the interface of some random minor function every update, a static type system may help, then again if it's just for our team: who cares? I'm sure we've all seen updates make internal modifications that break runtime behaviour but don't alter data models or function signatures in a way that get's picked up by a compiler.
Even in the most extreme type systems, interfaces are eventually going to need documentation and examples to explain domain/business logic peculiarities. What if a library interface requires a list in sorted order? Is it better to leak a LibrarySortedList type into the caller codebase? The modularity starts to break down. The alternative is use a standard library generic List type, but you can't force the data to be sorted. To encode this type of info we need dependent types or similar. A different example would be a database connection library, every database supports different key/value pairs for connection strings. If the database library released a patch which deprecated support for some arbitrary connection string param, you wouldn't find out until someone tried to run the code. Static analysis tools may catch common things like connection strings, but IME there's always some custom "stringly" typed values in business applications, living in a DB schema written 10+ years ago.
We also have to consider that the majority of our data arrives serialised, from files or over the wire. It's necessarily constrained to generic wire protocols, which have lower fidelity than data parsed into a more featured type system. Given that this type of data is getting validated or rejected directly after deserialisation, how much extra value is derived from having the compiler reiterate your validation code? Non-zero for sure, but probably not as much as we like to think.
Specialists require nuanced language when building up a body of research, in order to map out the topic and better communicate with one another.