That makes them much easier to read though; it's so hard to find a specific statement in English compared to math notation, since it's easier to spot a specific symbol than a specific word.
If you build an LLM whose design goal is to answer "I do not know" for anything that is not directly in its training set, then none of the above statements hold.
Being a "University Professor" means jack shit unless precisely in their (sub)-field. The authors are experts in biology, and evolution of information representation/communication, and about misinformation.
I'll gladly defer to their expert opinion on those topics, but IMO it's a stretch to use such an authoritative voice when they are not experts in actual AI systems. Judging by the massive progress in the field of AI, how can anyone even remotely state what these systems inherently are, when they are still so new and ever-evolving?
> currently designing a course to hopefully arm medical students with the required basic knowledge they need to navigate the changing world of medicine in light of the ML and LLM advances
Could you share what you think some of the key basic points they should learn would be? Personally, I see this landscape changing so insanely fast that I don't even know what to prepare for.
Absolutely agree that this is a fast-moving area, so we're not aiming to teach them the specifics of any particular tool. Instead, our goal is to demystify ML and AI approaches, so that the students understand that these technologies are not oracles but the result of a process.
We will explain the data landscape in medicine - what is available, what is good, bad, and potentially useful - and then spend a lot of time going through examples of what people are doing right now and what their experiences have been. This includes things like ethics and the protection of patient data.
Hopefully that's enough for them to approach new technologies as they are presented to them, knowing enough to ask about how it was put together. In an ideal world, we will inspire the students to think about engaging with these developments and be part of the solution in making it safe and effective.
This is the first time we're going to try running this course, so we'll find out very quickly if this is useful for students or not.
IIRC the base kit - garbage collector, runtime, etc. - is the same between Community and Enterprise. The biggest feature that comes to mind behind the enterprise-only wall (or at least it used to be) is profile-guided optimization.
Keep in mind that getting Meltdown to work might be very difficult depending on your setup. I wouldn't have been able to, at least when starting out; my teacher didn't provide us with targetable hardware.
The Spectre variants (particularly the RSB-based ones) are nice to start out with imo.
Yea fair, this is obviously a high-level overview. I think I found with Meltdown that I needed the assembly code. I was also able to reproduce it with actual C code, if I recall correctly, but that was way more finicky.
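For anyone who wants to poke at this, here's a minimal sketch of the flush+reload cache-timing primitive that Meltdown/Spectre PoCs are built on - not an exploit by itself, just the covert-channel receive side. It assumes x86-64 with gcc/clang intrinsics, and the "secret" byte and cycle threshold are placeholders you'd calibrate on your own machine:

```c
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>  /* _mm_clflush, _mm_mfence, __rdtscp */

static uint8_t probe[256 * 4096];  /* one page-strided slot per possible byte value */

/* Time a single load; a fast load means the line was already cached. */
static uint64_t time_access(volatile uint8_t *addr) {
    unsigned aux;
    uint64_t start = __rdtscp(&aux);
    (void)*addr;
    uint64_t end = __rdtscp(&aux);
    return end - start;
}

int main(void) {
    /* Touch every probe page once so it gets real physical backing
       (otherwise untouched pages may all share the zero page). */
    for (int i = 0; i < 256; i++)
        probe[i * 4096] = 1;

    /* Flush all probe lines, "send" one byte by touching its line,
       then reload every line and see which index comes back fast. */
    for (int i = 0; i < 256; i++)
        _mm_clflush(&probe[i * 4096]);
    _mm_mfence();

    uint8_t secret = 42;  /* stand-in for a byte a real PoC would read transiently */
    volatile uint8_t sink = probe[secret * 4096];  /* the "send" side of the channel */
    (void)sink;

    for (int i = 0; i < 256; i++) {
        uint64_t t = time_access(&probe[i * 4096]);
        if (t < 100)  /* machine-dependent threshold; calibrate before trusting it */
            printf("index %d looks cached (%llu cycles)\n", i, (unsigned long long)t);
    }
    return 0;
}
```

The actual Meltdown/Spectre part is replacing that `secret = 42` line with a faulting or mispredicted read, which is exactly the bit that tends to need hand-written assembly and vulnerable hardware.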
From a security perspective, a lot of changes to this world would be an upgrade. However, implementing security features is always a trade-off, and sometimes good security is just not worth the loss of other things.
My favourite phenomenon is when (computer) security gets in the way of (actual) security.
For example, you implement a super secure solution and no one hacks your website, but you end up being very unproductive and can't find a job. You lost food security.
During COVID, banking systems in my country were so hard to use - there were like six passwords just to log in. Not only was usability compromised in the name of security, but people, especially old people, started lining up in banks, compromising health security.
To say nothing of the scenarios where users just bypass obnoxiously exaggerated security systems, like leaving a post-it note with their password on their screen.
1. How does the software obtain new data at run time?
2. How do you make sure that thing doesn't pose a security hole when a vulnerability gets discovered? (assuming this never happens is unrealistic)
Vulnerabilities in what, though? If you make an application so simple that it can only fetch data through an API and display it, there's simply not much more that it can do. And a simple application is easy to audit. So it would be ideal if we could bundle this (akin to compiling) and deploy it on bare metal.
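To make "only fetch and display" concrete, here's a minimal sketch of what I mean - assuming libcurl, and with https://example.com/api/status as a placeholder endpoint. It does one HTTP GET and prints the body; there's no self-update mechanism, no plugin loading, nothing else to audit beyond this and the libraries it links:

```c
#include <stdio.h>
#include <curl/curl.h>

/* Write callback: stream the response body straight to stdout. */
static size_t print_body(char *data, size_t size, size_t nmemb, void *userdata) {
    (void)userdata;
    return fwrite(data, size, nmemb, stdout);
}

int main(void) {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    /* Placeholder URL - substitute whatever API the application displays. */
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/api/status");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, print_body);

    CURLcode res = curl_easy_perform(curl);
    if (res != CURLE_OK)
        fprintf(stderr, "fetch failed: %s\n", curl_easy_strerror(res));

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return res == CURLE_OK ? 0 : 1;
}
```

Even something this small still links an HTTP client and a TLS stack that need patching, which is where the update question above comes back in - but the surface is tiny compared to a general-purpose OS install.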
The answer to both questions is robust organizational infrastructure. To be frank, I think a minimal Linux system as a baseline OS serves most use cases better than a bare-metal application, but many applications have self-contained update systems and can connect to networks. Self-repairable infrastructure is a necessity, both in terms of tooling and staffing, for any organization for which an outage or a breach could be catastrophic, and the rise of centralized, cloud-reliant infrastructure in these contexts should be seen as a massive and unacceptable risk for those organizations to take on. Organizations being subject to unpatched vulnerabilities and unable to manage their systems competently is a direct result of replacing internal competency and purpose-built systems with general-purpose systems maintained and controlled by unaccountable, distant tech monopolies.
> the rise of centralized, cloud-reliant infrastructure in these contexts should be seen as a massive and unacceptable risk for those organizations to take on
I agree with you, but I also want to play the devil's advocate: using software like CrowdStrike is not what I would call being "cloud-reliant". It's simply using highly privileged software that appears to have the ability to update itself. And that is likely far more common than cloud-reliant setups.
Yea, and the use of highly privileged software with the ability to update itself, which the organization has no oversight of, should be the most suspect. Software is used by nearly every organization for drastically different needs, and I think there will never be adequate security or reliability for any of them if software providers continue to consolidate, generalize, and retain ever more control of their offerings. Personally, I think the solution is local-first software, either open-source or grown within the organizations using it, which necessitates having that capability within orgs. The whole "buy all our infrastructure from some shady vendor" model is a recipe for disaster.
How are those arguments against kernel-level detection from a security perspective?
His arguments show that without kernel-level access you either can't catch all bad actors, since they can evade detection, or the latency is so high that an attacker basically has free rein for some time after detection.