The core for developers will continue to be free and open source – i.e., libraries, language + policy engine, debugger, REPL.
We’ll have a commercial product geared towards ops and security teams. We’re not ready to take the wraps off anything yet, but among the things we’re working on or planning: an oso management service, auditing, policy governance, etc. See also: https://news.ycombinator.com/item?id=25443747
But truth be told, if you reach out to us via Slack/email/whatever we will do our best to help you regardless. Every eng on the team takes part in our community support rotation. We typically respond to a new Slack message in <5 min during business hours, and we watch it closely and respond as quickly as possible outside of that. We are eager to help our users.
The commercial support options are for customers that require specific contractual SLAs, etc.
The recent TalkPython podcast "oso authorizes Python" is great for understanding what oso does, how it goes about it, and how oso's role is different from identity providers.
It would even be great to move beyond terms like attribute-based access control (ABAC), which many devs aren't familiar with even if they've actually built those features (e.g., a user can read her own data). ABAC is technically so broad that it's not very descriptive, when in reality it's the underlying patterns that matter, e.g., hierarchies, sharing, multi-tenancy...and many more.
This is quite interesting, although I'm worried about the policies being tied to the underlying programming language. While this does allow for additional flexibility, it makes them hard to share in a polyglot shop.
The whole point is to have language-agnostic policy definitions. You write policies in the custom policy language, and every application that needs to do permission checks goes through the provided policy engine.
The programming-language-specific code is just some plumbing for accessing the policy agent.
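In Python, that plumbing can be as small as one HTTP call to the policy service. A rough sketch (the `myapp/authz` package path and `allow` rule name are hypothetical, and this assumes an OPA server running locally on its default port):

```python
# Minimal sketch of the "plumbing" for an OPA-style policy service.
# Assumes OPA is running locally and a hypothetical policy package
# "myapp.authz" defines an `allow` rule.
import requests

def is_allowed(user: str, action: str, resource: str) -> bool:
    resp = requests.post(
        "http://localhost:8181/v1/data/myapp/authz/allow",  # OPA Data API
        json={"input": {"user": user, "action": action, "resource": resource}},
    )
    resp.raise_for_status()
    # OPA responds with {"result": <value>}; "result" is absent if the
    # rule is undefined for this input, which we treat as a denial.
    return resp.json().get("result", False)
```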
> The programming-language-specific code is just some plumbing for accessing the policy agent.
If you want to make policy decisions over your application data you still need to work out a way to move it into the OPA service, or send it as part of the policy request.
You can make oso policies language agnostic if you take a similar approach and have a small number of shim rules that handle the application-specific parts.
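To sketch what that might look like with the Python library (class and method names here are hypothetical):

```python
from oso import Oso

class Repository:
    def __init__(self, id):
        self.id = id

class User:
    def __init__(self, roles):
        self.roles = roles  # e.g. {("reader", "oso-repo")}

    # Hypothetical application-specific lookup.
    def has_role(self, name, repo):
        return (name, repo.id) in self.roles

oso = Oso()
oso.register_class(User)
oso.register_class(Repository)

oso.load_str("""
    # Language-agnostic core rules, shareable across services:
    allow(user, "read", repo) if user_has_role(user, "reader", repo);
    allow(user, "write", repo) if user_has_role(user, "writer", repo);

    # Application-specific shim; the only rule you'd rewrite per language:
    user_has_role(user: User, name, repo: Repository) if
        user.has_role(name, repo) = true;
""")

repo = Repository("oso-repo")
assert oso.is_allowed(User({("reader", "oso-repo")}), "read", repo)
```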
I will say though: OPA is an awesome project and we drew a bunch of inspiration from it early on. I'm excited about more people being aware of the power of declarative policy as code :)
You're right, and this is something we've discussed a bunch. Our current view is that:
1. Allowing teams to standardise on an _approach_ and a library in the first place is a huge step up from individual ad hoc implementations.
2. If you have shared logic between applications, you can still write language-agnostic policies and share those between them. If there are parts that are specific to an application/language, it's easy enough to pull these into separate rules and add shims.
3. We intentionally wanted to keep the language lean to lower the learning curve. But we're considering adding a standard library in the future, powered by the Rust core, to provide common functionality. The nice thing is that you're not limited by how fast we add those.
We're keen to hear from folks who are working in those environments about what would make the most sense for them.
I'd like to buy into Rust for authorization, but the compile times cost too much money and I wound up purging it.
This library needs a lot of work to speed up compile times before I can use it. Procedural macros are usually opt-in through cargo features because of their extreme hits to compile time (this includes stuff like serde), and if you're trying to monetize this I'd recommend having precompiled binaries available somewhere so I don't need to pull things from crates.io to compile it cold.
I timed it on a laptop that I know to have similar power to a CI runner, running a clean `cargo build`.
Thanks for giving it a try! Were you using this in a Rust project, or building it from scratch for some other language?
For the latter, we do provide libraries for Python, Ruby, Java, and Node.js with the Rust core precompiled. It's a tricky CI challenge for us, but it shouldn't affect users.
If you're using it in a Rust project, then you will indeed need to pull in our dependencies. Though we are pretty lean on dependencies, and I would expect most projects using oso would be using serde anyway.
That being said, those times you're seeing seem very long, even for Rust! Is that building with `--release`? Also, the first compile will be slow, especially including downloading the source, but incremental builds are a lot faster. We do a little caching in CI to make this a bit less painful.
I was just checking the build times from your GitHub repo by running `cargo build`. If I used it in a project, it would be multi-language and require linking against native code through a C shim. It would be great if that was provided and had a stable ABI + .so for the Rust code that I could install through a .deb or .rpm instead of building through cargo.
> Though we are pretty lean on dependencies, and I would expect most projects using oso would be using serde anyway.
It would be great if `serde` was opt-in through cargo features, which is pretty common in the Rust ecosystem. While your manifests are small there's still quite a bit getting pulled in. Might want to inspect the dependency graph to see if there's anything excessive or some features that can be non-defaulted.
> That being said, those times you're seeing seem very long, even for Rust!
Not in my experience. This was a clean debug build. It's not uncommon for CI runners to have much longer times than developer machines (macOS CI runners are especially bad about this, and they also cost a lot more money). That's why I do profiling on a weaker one when evaluating build times. Incremental builds are faster, but they're also harder to orchestrate.
Not trying to be overly critical, just trying to give a view! Compiling on other people's machines is expensive, a good way for me to buy into something like this instead of doing it myself (which is how I'm doing it right now!) is to reduce the dependency tree and make building stupid fast/easy.
> It would be great if that was provided and had a stable ABI + .so for the Rust code that I could install through a .deb or .rpm instead of building through cargo.
This would be a great option. That's effectively how our build pipeline works anyway - we compile the polar-c-api [1] crate to a dynamic lib and embed it in each language. If we made this available through package managers, we could also provide an alternative installation for, e.g., Python that didn't even need to use the precompiled wheel.
> It would be great if `serde` was opt-in through cargo features,
We rely on this for the FFI, since we pass events/messages back and forth as JSON. We _could_ make it opt-in for people using it in Rust projects though. Would that work for your use case?
Running our test suite [2] takes about 9 minutes, 4 of which are spent (a) running the pure Rust tests, (b) compiling the C library and running the Python, Ruby, and Java tests, and (c) building the WASM library and running the Node.js tests.
That's why I'm a little surprised by the time. You're right though, I didn't factor in the slowdown for the macOS builds.
> Not trying to be overly critical, just trying to give a view! Compiling on other people's machines is expensive, a good way for me to buy into something like this instead of doing it myself (which is how I'm doing it right now!) is to reduce the dependency tree and make building stupid fast/easy.
Thank you! I appreciate the feedback. We did go out of our way to make it as easy as possible to install from whatever language package manager you use (e.g. `pip install oso` takes seconds). We weren't anticipating people would want to compile it themselves from scratch, so it's a helpful perspective to get.
It looks like it’s written in Rust, which isn’t super special, but it would be far easier to embed as a library in other languages compared to Go in OPA’s case. Polar also seems like a nicer policy language (strong Prolog vibes), though this is subjective as I’ve not done a deep comparison yet.
Yep, you're right on this one! We went with Rust because we wanted to make it embeddable in other languages.
And on the policy language: Polar is indeed a Prolog variant. You can see that pretty early on we took the decision to diverge from the more familiar Prolog syntax [1] because we wanted to make it a little more accessible.
oso was designed to be embedded directly in applications as a library. So there's no additional service to run to use it to make authorization decisions in an application.
The other part is that oso policies are written directly over your application data - e.g. the classes/types - and can access attributes and call methods. There's no integration work needed to get application data available to the policy to start making decisions, nothing to keep in sync, and no additional network overhead.
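A minimal sketch in Python (the `Document` class and its attributes are made up for illustration):

```python
from oso import Oso

class Document:
    def __init__(self, owner, is_public):
        self.owner = owner
        self.is_public = is_public

oso = Oso()
oso.register_class(Document)

# The policy reads attributes straight off the live application object;
# nothing is copied into a separate policy store or kept in sync.
oso.load_str("""
    allow(_user, "read", doc: Document) if doc.is_public = true;
    allow(user, "read", doc: Document) if doc.owner = user;
""")

doc = Document(owner="alice", is_public=False)
assert oso.is_allowed("alice", "read", doc)
assert not oso.is_allowed("bob", "read", doc)
```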
Looks cool, but I feel it doesn't really help with the main goal that OPA, or any other policy engine, is trying to achieve (IMHO): decoupling the authorization part from the application.
Do you have any particular use case showing why this is an advantage? Maybe I'm missing the main purpose of having everything in the application if, in the end, the goal (or at least the one I perceived) is to keep authorization decoupled.
We talk about this in our design principles [1], but I'll give you a summary here too. We often use GitHub as a typical use case [2].
In GitHub you might be able to merge a PR in a repository because you are the repo owner, or because you were invited to it, or because you're an organization admin and the repo is in the same organization. (These are all common scenarios in B2B SaaS apps.)
When a user attempts to merge a PR (or hits the API) you need to make that authorization decision based on what the application knows about the user.
So how do you allow the policy to access this information? Your options are basically: (a) you make it possible for the policy engine to independently look up the data - you are now building a distributed monolith, since any change to the data requires updating both the policy engine and the application; or (b) you send the relevant data along with the policy request - knowing what data you need to send to the policy is another form of coupling.
OPA is sort of a combination of (a) and (b). There is an API for sending authorization data to it, and you can also send data along with a request.
The problem is that the line between authorization and business logic is _so blurry_. Me being a member of a GitHub org is fundamental application logic, but it's also crucial to making authorization decisions.
Because of this, people traditionally write this logic as part of the application code, normally as a bunch of `if` statements. This is hard logic to abstract well in an app, because it's usually a pile of conditionals and a decision flow - which, it turns out, Prolog-like/logic-based languages are great at expressing (the same conclusion the OPA folks reached).
So given all this, the balance we wanted to strike was: decoupling the logic, but not the data. If your app already defines what a User is, what a Repository is, how a User becomes a Member of a Repository, then write your policy over those things, instead of re-implementing all of that logic elsewhere.
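Concretely, the GitHub example above might look something like this (a sketch with hypothetical class and attribute names, not the real GitHub model):

```python
from oso import Oso

# Hypothetical stand-ins for the application's existing models.
class User: ...

class Organization:
    def __init__(self, admins):
        self.admins = admins

class Repository:
    def __init__(self, owner, collaborators, org):
        self.owner = owner
        self.collaborators = collaborators
        self.org = org

oso = Oso()
for cls in (User, Organization, Repository):
    oso.register_class(cls)

oso.load_str("""
    # Repo owners and invited collaborators can merge PRs.
    allow(user: User, "merge_pr", repo: Repository) if
        repo.owner = user or user in repo.collaborators;

    # Org admins can merge PRs on any repo in their organization.
    allow(user: User, "merge_pr", repo: Repository) if
        user in repo.org.admins;
""")

alice, bob = User(), User()
org = Organization(admins=[bob])
repo = Repository(owner=alice, collaborators=[], org=org)
assert oso.is_allowed(alice, "merge_pr", repo)  # repo owner
assert oso.is_allowed(bob, "merge_pr", repo)    # org admin
```

The policy captures the decision flow, but `User`, `Repository`, and org membership stay defined in one place: the application.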
That being said, there are cases where the OPA-style decoupling works fine since there isn't the same kind of business logic/authz logic distinction. Or where the decision is being made over the entire input by default (things like checking terraform files, or other infra use cases).
Would you consider something comparable to an OPA server as a future possibility for oso? I think that’s something that’s really valuable about OPA in a microservices environment - having a single place to consult for and log policy decisions.
I suppose I could implement an OSO server to perform this function.
I’d also be interested to know if you’d compared the performance of oso policy vs Rego on comparable input.
Absolutely. I think it would look a little different, but at a high level would achieve the same goals - a single place to see policies and auditing of decisions.
We've done some internal proofs of concept of this and have discussed it with various folks, and we're happy to share more with anyone interested.
> I’d also be interested to know if you’d compared the performance of oso policy vs Rego on comparable input.
Not yet, but this is a great suggestion. I would be a little worried that it would be hard to do in an unbiased way - the two have different design choices, etc.
I’m unfamiliar with the oso architecture so this might not be how it works, but I’d be interested in particular in the performance with ‘large’ input objects (~5mb+ in JSON). They have a good page in their documentation here: https://www.openpolicyagent.org/docs/latest/policy-performan... which explains the loaded data is around 20x the input - we’ve had to work around this. Though we are using OPA for something which it’s not really designed for too...
Oso seems to support this by ‘registering’ objects in the host language, e.g. https://docs.osohq.com/using/libraries/ruby/index.html#worki... in Ruby. I like the look of this and I presume that this is how you’d rather have users build more advanced policy than extending what seems to be a pretty lightweight language (https://docs.osohq.com/using/polar-syntax.html). OPA seems to have gone the other way and has a lot of functionality in the language for common cases, e.g. x509, jwt, etc. Would I be right in assuming that the current design decision is not to add such functionality to oso?
In general, you shouldn't really need to pass in large input objects to oso - it operates over the application data.
What this means in practice is that either the data is processed through whatever data access layer you have (e.g., SQL or an ORM) - and there's more work we're doing here to make that experience seamless [1].
Or if you do have some large input data and you iterate over it in the policy, then the oso host library (the part in your app) will just iterate through it without sending the entire object back and forth.
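For example (a sketch; the `Dataset` class is hypothetical, and exactly which collection types are iterated lazily is an assumption worth checking against the docs):

```python
from oso import Oso

class Dataset:
    def __init__(self, allowed_users):
        # Potentially a very large collection. The host library iterates
        # it element by element on demand, rather than serializing the
        # whole thing into the policy engine up front.
        self.allowed_users = allowed_users

oso = Oso()
oso.register_class(Dataset)

# `in` walks the host collection from the policy side.
oso.load_str("""
    allow(user, "read", ds: Dataset) if user in ds.allowed_users;
""")

ds = Dataset(allowed_users=["user%d" % i for i in range(100_000)])
assert oso.is_allowed("user42", "read", ds)
```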
> I presume that this is how you’d rather have users build more advanced policy than extending what seems to be a pretty lightweight language
Yep, that's the idea. I answered a similar question here [2]. You can call class + instance methods from Polar, so if there's anything you can't do you can add it that way. We have considered/are considering adding a standard library to provide common pieces out of the box, but it's not a limiting factor for using oso currently.
There are some side benefits though that a standard library would provide - like having a robust implementation of common operations in the Rust core.
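As a quick illustration of the method-call escape hatch (a sketch; `signature_is_valid` is a method you'd write yourself, not an oso built-in):

```python
from oso import Oso

class Request:
    def __init__(self, token):
        self.token = token

    # Custom logic (e.g. JWT validation via your preferred library)
    # stays in the host language; stubbed out here.
    def signature_is_valid(self):
        return self.token == "let-me-in"

oso = Oso()
oso.register_class(Request)

# The policy calls the instance method directly.
oso.load_str("""
    allow(_user, "call", req: Request) if req.signature_is_valid() = true;
""")

assert oso.is_allowed("anyone", "call", Request("let-me-in"))
```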
Thanks for the prompt & detailed responses in this thread.
It’ll be interesting to see what this looks like in a polyglot stack; I think that’s where having functionality in the policy language might be really valuable.
For the time being, it’d still be possible to build an oso-based policy server which applications invoke to make consistent decisions - so there are options to achieve this in the meantime too.
Agreed! And I strongly recommend to anyone thinking about doing so to join our slack [1] and come chat with us :) We'll be happy to share our thoughts on this.
All these questions are also tempting me to go and put together a demo for this too...
Thanks to OP for posting this. A very pleasant surprise to wake up and find this here :)
I'll be around to answer any questions people have, but you can also find myself and the team in our community slack: https://join-slack.osohq.com/
All the docs are hosted here: https://docs.osohq.com/
And we have written up various articles on using oso and some of the internals on our blog: https://www.osohq.com/company/blog