"Solid SELinux policy" is really the hard part there.
When I pip install paramiko, I do, in fact, want it to have access to my SSH keys. When I pip install ansible, I want ansible to be able to shell out to OpenSSH to use my keys. If I write custom Python code that calls gpg, I want that custom code to be able to load libraries that I've pip installed without the gpg subprocess being blocked from loading my keys. If I have a backup client in Python, and I tell it to back up my entire home directory, I want it backing up my entire home directory including private keys.
If I wanted an OS where I couldn't install arbitrary code and have it get to all my files for my own good, I'd use iOS. (I do, in fact, use iOS on my phone because I sometimes want this. But when I'm writing Python code, I don't.)
SELinux has been able to solve the problem of "if a policy says X can't get to Y, prevent X from getting to Y" for years. Regular UNIX permissions have been doing the same for decades. (Yes, SELinux and regular UNIX permissions take a different approach / let you write the policy differently, but that's the problem they're fundamentally solving; given a clear description of who to deny access to, deny this access.) Neither SELinux nor UNIX permissions nor anything else has solved the problem of "Actually, in this circumstance I mean for X to get to Y, but in that circumstance I don't, and this is obvious to a human but there's no clear programmable distinction between the cases."
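The mechanical half of the problem really is solved, and has been for decades. A minimal sketch of the mechanical guarantee UNIX permissions provide (the key file here is a throwaway placeholder, not a real key):

```python
import os
import stat
import tempfile

# Create a stand-in "private key" file and lock it down the way
# ssh-keygen does: mode 600, owner read/write only. Any process
# running under a different UID is then refused -- mechanically,
# with no judgement about intent involved.
fd, path = tempfile.mkstemp()
os.write(fd, b"-----BEGIN OPENSSH PRIVATE KEY-----\n")
os.close(fd)
os.chmod(path, 0o600)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o600
```

Given a clear description of who to deny, the kernel denies. What it cannot do is distinguish two processes under the same UID whose only difference is the user's intent.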
To be clear - I think there is potentially something of a hybrid approach between the status quo and what newer OSes do. For instance, imagine if each virtualenv were its own sandboxed environment (which could be "SELinux context" or could just be "UNIX user account") and so if you're writing code in one project, things you pip install have access to that code but not your whole user account. I'm just saying that SELinux hasn't magically solved this problem because all it provides is tools you could use to solve it, not a solution itself.
> When I pip install paramiko, I do, in fact, want it to have access to my SSH keys
Do you? Wouldn't it be better if Paramiko was obliged to access your keys via the agent? Then we could secure the agent (there's more work to be done here anyway) and also it fixes problems where Paramiko wants to do something that the agent could facilitate (and so works with plain ssh) but Paramiko doesn't know about yet, like using a FIDO device to get a key.
One of the obvious things the agent could do on a workstation is mention that your keys are being used. Imagine you run a job, which reaches out via Paramiko to fifteen servers, you see a brief notification saying the agent signed 15 logins for Paramiko. That makes sense. An hour later, reading the TPS reports, the agent notifies you again, Paramiko just signed another login. Huh? Now you're triggered to investigate what happened and why instead of it just silently happening and you read a press piece in a month about how a new version of Paramiko is off-siting your keys because bad guys broke into a GitHub repo or whatever.
> Wouldn't it be better if Paramiko was obliged to access your keys via the agent?
That's just kicking the can down the road. You get the same exact problem, but with the agent permissions rather than the actual keys.
The problem is "I installed paramiko and I know what it does and I want it to access my SSH credentials, but I don't want evillib123 to access my SSH credentials even though I have installed it as well and I think I know what it does, but I am mistaken" and the distinction between the two cases above is in the intent and judging intent is hard.
> One of the obvious things the agent could do on a workstation is mention that your keys are being used
This has been tried many times. Windows UAC is one of the more ubiquitous and notorious examples. If everything starts sending you notifications, you stop paying attention to them. That is what happened to UAC: it would notify users about important things (software installs, system setting changes, admin access, etc.), but it did so too often, and most people would just click through without actually reading what the notification was about. And the reason it did it too much is that it cannot judge intent. It cannot tell the difference between me installing an application deliberately and me installing an application because I was tricked into it somehow.
> You get the same exact problem, but with the agent permissions rather than the actual keys.
This is also importantly wrong in a subtle way. If evillib123 steals a SSH private key, that key isn't private any more and my only option is to revoke the key and make a new one. Nothing else works, they have the key and can authenticate as me as often as they like whenever they like until that key's access is revoked.
But if they only have permission to get the agent to do operations their ability to authenticate is limited by the agent. If they lose access to the agent they can't authenticate any more. That would happen if I uninstall their malware, or if the agent locks access of its own accord (e.g. it's common to auto-lock after suspending a machine or locking the screen) or if the machine is just switched off altogether.
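The distinction can be sketched in a few lines of Python. This is a toy model, not the real ssh-agent protocol: a copied raw key works forever, while an agent handle stops working the moment the agent locks or access is revoked.

```python
class ToyAgent:
    """Stand-in for ssh-agent: holds the key, hands out signatures."""

    def __init__(self, key):
        self._key = key
        self._locked = False

    def lock(self):
        # e.g. on suspend, screen lock, or malware removal
        self._locked = True

    def sign(self, data):
        if self._locked:
            raise PermissionError("agent is locked")
        return f"sig({self._key},{data})"


stolen_key = "raw-private-key"     # an exfiltrated file: a copy is forever
agent = ToyAgent("raw-private-key")

sig = agent.sign("login-1")        # works while access is granted
agent.lock()
try:
    agent.sign("login-2")          # revoked: no further authentications
except PermissionError as e:
    print(e)
# ...but nothing ever revokes the stolen copy of the raw key itself.
```

The raw key is a bearer credential; the agent handle is a capability that the agent can withdraw at any time.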
This is a good point. Agent still does not solve the "I thought I meant to do it, but I didn't really mean to" problem, but I agree that it does take a step towards minimizing the damage.
UAC interrupts you, which is bad. UAC thinks that a thing happening is so important you need to acknowledge it. Everybody's going to learn to click past.
I'm talking about notifications not interruptions. At most a toast message, much more likely just a small indicator lights up. Not a big deal - when you'd expect it.
Think about the turn indicator on the dashboard of your car. When you indicate one way or the other, a lamp blinks on and off, sympathetic to (and in older models directly run by the same relay as) the external turn lamps. But it doesn't ask you to confirm ("Are you really turning left? Yes/No"), and since you're expecting it, you hardly notice. Now imagine you're on the highway and that lamp suddenly illuminates for no apparent reason. That's weird, right? You might be too busy to do anything about it immediately, but you'd now be concerned that perhaps there's a problem. Good!
That's what I'm talking about. Yes, out of a million users whose key got abused, maybe 90% of them weren't looking at the screen when it happened, 90% of those left were too busy or didn't understand why it was strange, 90% of those who noticed never actually investigated, and 90% of those who investigated gave up without notifying anyone about this weird phenomenon... you've still got a hundred users complaining about the problem.
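The back-of-the-envelope arithmetic here checks out: four successive 90% filters applied to a million users still leave roughly a hundred reports.

```python
users = 1_000_000.0
# Each stage loses 90% of the remaining users:
# looking at the screen, finding it strange, investigating, reporting.
for stage in range(4):
    users *= 0.10
print(int(users))  # 100
```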
People act like UAC leads to banner blindness but I don't think that really holds up.
In the mobile space you get prompts for soooo many things, and loads of people see "ask for location data" and say no when they think it shouldn't be used! The system works!
I see loads of articles about people talking around permissions. Much much more than for tools on desktop computers. I believe that the higher visibility makes people noticing much more likely.
Of course the hypothetical "don't care" person won't notice.... but definitionally they won't ever notice!
I think it's fairly undisputed that the little lights on webcams that turn on when the camera is enabled have totally worked, and the location services blue bar on iOS has worked well too, IMO.
> Then we could secure the agent (there's more work to be done here anyway)
That's the problem, though. How do you secure the agent? How do you make sure that the program talking to the agent is doing something good and not evil with the request?
Yes, there is some defense-in-depth advantage to making this change, but the thing you're trying to solve here is that you can pip install thing X and have thing X run ssh with your credentials if thing X isn't evil, and you want to automatically determine whether it's evil.
> Imagine you run a job, which reaches out via Paramiko to fifteen servers, you see a brief notification saying the agent signed 15 logins for Paramiko. That makes sense. An hour later, reading the TPS reports, the agent notifies you again, Paramiko just signed another login. Huh?
That seems like it defeats only the most naive malware. Why wouldn't the malicious Python module sit around and wait for you to make a legitimate SSH connection? Would you notice if your agent signed 16 connections instead of 15? (What if it made one of the requests time out so it kept it at 15 notifications?)
Remember that the problem you're trying to solve is to prevent arbitrary code from being evil. This is basically equivalent to the antivirus problem, and there's a long history of appealing-sounding, naive, and ultimately useless solutions to the antivirus problem.
You've decided upon a very broad and likely impossible to solve problem, whereas I'm focused on a narrower problem.
There is relatively little incentive to just "be evil". But much more incentive for certain specific intents that are evil, and so if we can make those trickier we get most of the benefit without solving the impossible problem.
This happens elsewhere in society. We put a bunch of effort into deterring car theft, but crooks could also steal mailboxes, or shrubs from your garden, or garbage. They mostly don't though because there's no incentive - in your city chances are you can find somebody who'll take a stolen car off your hands for cash, but good luck finding anybody who can pay you for a dozen stolen rose bushes.
Likewise I doubt that there's a healthy market for "Sometimes you might with no prior notice get to SSH into a target machine". Even the raw SSH private keys being stolen here are a pretty niche product, I think the actual authentication privilege itself, rather than the raw keys, is so much harder to exploit profitably that it won't sell.
That doesn't mean nobody would do this, but it makes it into a targeted attack. Think "organised gang break into one family home to kidnap a bank manager as part of a scheme to get into the vault" not "burglars break into homes across the city to steal jewellery". We don't fix the problem, but we do greatly mitigate the impact on most of the population.
This is why I don't think this is an OS problem. I think it's a developer mindset problem.
Dependencies are bad.
Every single dependency in your code is a liability, a security loophole, a potential legal risk, and a time sink.
Every dependency needs to be audited and evaluated, and all the changes on every update reviewed. Otherwise who knows what got injected into your code?
Evaluating each dependency for potential risk is important. How much coding time is this saving? Would it, in fact, be quicker to just write that code yourself? Can you vendor it and cut out features you don't need? How many other people use this? Is it supported by its original maintainer? Does it have a deep-pocketed maintainer that could be an alternative target for legal claims?
Mostly, people don't do that and just "import antigravity" without wondering if there's a free lunch in there...
I strongly disagree that this isn't an OS problem.
s/dependency/application/g in your comment. Dependencies are just applications that are controlled through code rather than via a mouse/keyboard. They're not special.
I run a minimal Arch setup at home for my development machine, partially for security/reliability reasons -- less software means fewer chances for something to go wrong. But this is a band-aid fix. A minimal Arch setup that forgoes modern niceties like a graphical file browser is not a general-purpose solution to software security.
When someone comes to me and says that an app is stealing their iOS contacts behind their back, my response isn't, "well, it's your own fault for installing apps in the first place. Apps are bad." My response is to say that iOS apps shouldn't have access to contacts without explicit permission.
The same is true of language dependencies. Both users and developers need the ability to run untrusted code. The emphasis on "try your hardest not to install anything" is (very temporarily) good advice, but it's ultimately counterproductive and harmful if it distracts us from solving the root issues.
But until we can provide a form of static analysis that can tell you whether a dependency is malicious or not, we're stuck either manually auditing them, or not using them.
There's very little to choose between a user coming to you saying "I ran a bad application" and "I ran a bad application and clicked on the allow button because I had no way of knowing it was a bad application and I have to allow all applications". Users are notorious for defeating access permissions. Implementing this same bad solution on developers isn't going to work.
At the risk of sounding like someone who wants to spark a language fight (which I genuinely don't) this is why I love Go. The standard library is so good that I rarely need to bring in any third-party dependencies, and the few I do use are extremely well-known with many eyes on their code.
That sounds like a boil the ocean solution. We're never going to get all developers to be perfect, and besides there are evil devs as well so the solution has to be elsewhere.
The solution most people seem to be talking about is sandboxing imports off into containers (sandboxes, whatever - these will end up as containers) so that they can have their access to sensitive data and API's controlled. These aren't "code dependencies" any more, these are "runtime services". It implicitly conforms to "dependencies are bad" by forcing all dependencies to be external services. But it doesn't allow you to actually import known-good dependencies from trusted sources.
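A rough sketch of the "dependency as a runtime service" shape in Python, using a plain child process as a stand-in for whatever container or sandbox would actually enforce the policy (the `sort_service` function is hypothetical):

```python
from multiprocessing import Pipe, Process


def sort_service(conn):
    # Imagine this child process runs under a locked-down context
    # (seccomp filter, SELinux domain, container) -- OS policy applies
    # to the service process rather than to your whole program.
    items = conn.recv()
    conn.send(sorted(items))
    conn.close()


parent, child = Pipe()
proc = Process(target=sort_service, args=(child,))
proc.start()
parent.send([3, 1, 2])
result = parent.recv()   # what used to be a function call is now an RPC
proc.join()
print(result)
```

The cost is visible in the shape of the code: every call crosses a process boundary, which is exactly why this works for coarse-grained services and poorly for "I just want to import a sort function."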
And specifically granting access permissions to code has always worked before, right? I mean, people never just click "allow" all the time so they're not bothered by security dialogs, do they? Why are we talking about implementing such a proven-bad solution yet again?
> And specifically granting access permissions to code has always worked before, right? I mean, people never just click "allow" all the time so they're not bothered by security dialogs, do they? Why are we talking about implementing such a proven-bad solution yet again?
To be clear, is your argument that it's too hard for us to teach people to avoid granting unnecessary permissions, but not too hard for us to teach users not to install any software in the first place?
Educating users about permissions is hard, convincing users not to download anything is impossible.
My argument is that user behaviour proves that this solution isn't actually a solution. It shifts the blame, but it doesn't solve the problem.
Developers will just allow the bad code access to the things it says it needs, because it says it needs them. Meanwhile we have another sandbox layer to deal with, which isn't good.
We need to reduce the proliferation of dependencies, and only use them for important things, to reduce the attack surface. And we need to tighten up the package managers so typosquatting and duplication of interfaces is flagged (if not banned), and we need some kind of static analysis that flags what capabilities a library uses. And I'm sure there's lots more ways of solving it that I can't think of here.
> We need to reduce the proliferation of dependencies, and only use them for important things
What you're proposing here is infinitely harder than teaching users to be responsible with permissions. If you can't teach a developer not to grant code access to everything it asks for, you are not going to be able to teach them to install fewer dependencies. It just won't happen, it's completely unrealistic.
A lot of the solutions you're proposing have significant downsides, or they don't scale. Static analysis is great, but doesn't work in highly dynamic languages like Python, Javascript, and Lisp. It also can't handle ambiguously malicious behavior, like device fingerprinting. Static analysis is just a worse version of sandboxing with more holes and more guesswork. Manual reviews don't scale at all -- they're even more unrealistic of a solution than trusting developers to be frugal about the code they install. Tightening package names is nice, but again, not a silver bullet. Sometimes official libraries with official names go bad as well. We have a lot of solutions like this that we can observe in the wild, and they don't really work very well. Google Play still has malware, even though Google says they review apps and remove fraudulent submissions.
On the other hand, we actually have pretty good evidence that sandboxing at least helps -- namely, iOS and the web. Sandboxing isn't perfect, it's a very complicated UX/UI problem that I consider still somewhat unsolved. But, iOS is making decent progress here. Their recent permission reminder system periodically asks users if they want to continue granting permissions to an app -- that's really smart design. The web has also been making excellent progress for a long time. The web has a lot of flaws, but it is a gold standard for user-accessible sandboxing. Nobody thinks twice about clicking on a random link in Twitter, because they don't have to. There's obviously still a lot that needs to improve, but if the primary concern we had about malicious packages on PyPi was that they might mine bitcoin in the background, that would be a very large improvement over stealing SSH keys.
The reason sandboxing is so good is specifically because it shifts blame. Shifting blame is great. With the current situation, I need to audit the code and do research for every single app I install on my PC -- I have to decide whether the author is trustworthy. If the author isn't trustworthy, there's nothing I can do other than avoid their app entirely. This is complicated because trust isn't binary. So I can't just separate authors into "good" and "bad" categories, I have to grade them on a curve.
I do this. It's exhausting. A system where I manage permissions instead of granting each codebase a binary "trusted" label would be a massive improvement to my life, and it's crazy to me that people are in effect saying that we should keep dependencies terrible and exhausting for everyone just because the solution won't help users who are already going to ignore safeguards and install malware anyway.
Imagine if when multiuser systems were first proposed for Unix, somebody said, "yeah, but everyone's just going to grant sudo willy-nilly or share passwords, so why even separate accounts? Instead, we should encourage network admins to minimize the number of people with access to a remote system to just one or two." The current NodeJS sandboxing proposals would mean that when I import a library, I can globally restrict its permissions and its dependencies' permissions in something like 3 lines of code -- the whole thing is completely under my control. The alternative is I spend hours trying to figure out if it's safe to import. How is that better?
Because a dependency isn't a service. You're talking about dependencies as if they're standalone services that you consume. I think that's probably the predominant attitude at the moment, so sandboxing dependencies to turn them into (effectively) standalone services that you consume might work.
But I don't use dependencies like that. I'm mostly just importing useful functions from a library. Having to sandbox that function away from the rest of my code is not going to work. I'll end up copy/pasting the code into my project to avoid that.
When we talk about sandboxing dependencies, we're talking about sandboxing at an API level, not an OS level -- in some languages (particularly memory-unsafe languages) that's difficult, but in general the intention isn't to put dependencies in a separate process; it's to restrict access to dangerous APIs like network requests.
Sandboxing might be something like, "I'm importing a function, and I'm going to define a scope, and within that scope, it will have access to these methods, and nothing else." Imagine the following pseudo-code in a fictional, statically typed, rigid language.
    import std from 'std';
    import disk from 'io';
    import request from 'http';

    // This dependency (and its sub-dependencies) can
    // only call methods in the std library, nothing else.
    // I can call special_sort anywhere I want and I *know*
    // it can't make network requests or access the disk.
    // All it can do is call a few core std libraries.
    import (std){ special_sort } from 'shady_special_sort';

    function save (data) {
        disk.write('output.log', data);
    }

    function safe_save (data) {
        if (!valid(data)) { return false; }
        save(data);
        return true;
    }

    function main () {
        // An on-the-fly sandbox -- access only to request and safe_save.
        (request, safe_save){
            save('my_malware_payload');                      // compile-time error
            disk.write('output.log', 'my_malware_payload');  // compile-time error
            safe_save('my_malware_payload');                 // allowed
        }
    }
We're not treating our dependencies or even our inline code as a service here -- we're not loading the code into a separate process or forcing ourselves to go through a connector to call into the API. We're just defining a static constraint that will stop our program from compiling if the code tries to do something we don't want, it's no different than a type-check.
The difference between this and pure static analysis is that static analysis isn't built into the language, and static analysis tries to guess intent. Static analysis says, "that looks shifty, let's alert someone." A language-level sandbox says, "I don't care about the intent; you have access to X and that's it."
Even in a dynamic language like JS, when people talk about stuff like the Realms proposal[0][1], they're talking about a system that's a lot closer to the above than they are about creating standalone services that would live in their own processes or threads.
This kind of style of thinking about security lends itself particularly well to functional languages and functional coding styles, but there's no reason it can't also work with more traditional class-based approaches as well -- you just have to be more careful about what you're passing around and what has access to what objects.
    class Dangerous {
        unsafe_write (data) {
            // unvalidated disk access
        }
    }

    class Safe {
        public Dangerous ref = new Dangerous();
        safe_write (data) {
            validate(data);
            ref.unsafe_write(data);
        }
    }

    function main () {
        Safe instance = new Safe();
        (instance){
            // I've just accidentally given my sandbox
            // access to `unsafe_write`, because I left
            // the `ref` property public.
            instance.ref.unsafe_write('oops');  // allowed!
        }
    }
Even with that concern, worrying about my own references is still way, way easier than worrying about an entire, separate codebase that I can't control.
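Plain Python can't enforce a scope like the pseudo-code above -- an import can always reach `os` or `socket` on its own, which is exactly the dynamic-language problem raised earlier in the thread -- but the capability-passing discipline can at least be approximated by handing a dependency only the functions it is allowed to call. `special_sort` and `make_safe_save` here are illustrative, not from any real library:

```python
# Hypothetical third-party function: it receives its capabilities as
# arguments instead of importing os/socket itself. This is discipline,
# not enforcement -- nothing stops it from importing os anyway.
def special_sort(items, *, compare=None):
    return sorted(items, key=compare)


def make_safe_save(write):
    # Wrap a raw write capability behind validation before sharing it.
    def safe_save(data):
        if not isinstance(data, str):
            return False
        write("output.log", data)
        return True
    return safe_save


log = []
safe_save = make_safe_save(lambda path, data: log.append((path, data)))

print(special_sort([3, 1, 2]))    # no ambient authority used
print(safe_save("hello"))         # True -- validated, then written
print(safe_save(b"\x00payload"))  # False -- rejected by the wrapper
print(log)
```

A language with first-class support (like the Realms-style proposals) would turn this convention into a compile-time or load-time guarantee.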
Ideally, though, we wouldn't all have to reinvent the wheel for the (n+1)th time. The great power of software is that something can be written once and used over and over again, unlike the way each building needs to be built from the ground up, each dinner has to be cooked from raw ingredients every day, etc.
To give up this kind of modularity and reliance on other software engineers' work would be to throw the baby out with the bathwater.
Sure you need to apply judgement about whether a library seems legit, but the other end of the spectrum is the not-invented-here attitude, which is also bad.
The other question to ask yourself is if you want the dependency as a visible external thing, or do you want it cut & pasted into your code?
Just saying that "dependencies are bad" means people are more likely to cut and paste that algorithm or bit of code into their application rather than taking it from some sort of package. Then you don't even know that it is a dependency, and you don't get any updates or bug fixes for it either.
Have to be careful about those unintended consequences there.
I agree with you. We need developers who take responsibility for their publications, review and test their codebase and all its dependencies, proper identification of "real" published code (integrity checks), and also the ability to opt in to placing trust in different maintainers.
The value-add for SELinux is that the security boundary is no longer the user. Prior to SELinux a process running as `bob` is allowed to access anything that Bob himself can access.
It at least pushes the boundary to I want to allow X program to access Y instead of I want to allow X user to access Y.
Using a user account per-app only really works elegantly on single user systems where there are a small number of apps.
bob-firefox, bob-vim, alice-evolution, alice-calculator,... would be a nightmare to maintain compared to being able to apply policy to the program itself.
> bob-firefox, bob-vim, alice-evolution, alice-calculator,... would be a nightmare to maintain compared to being able to apply policy to the program itself.
This is how Android works, btw. Each app has its own UID.
And the standard SELinux policies don't solve this problem, anyway - when there are a small number of apps, you can give them each a context, but maintaining one for hundreds or thousands of apps is a nightmare. For web serving itself, quoting from https://linux.die.net/man/8/httpd_selinux :
The following process types are defined for httpd:
httpd_cvs_script_t, httpd_rotatelogs_t, httpd_bugzilla_script_t, httpd_smokeping_cgi_script_t, httpd_nagios_script_t, httpd_dirsrvadmin_script_t, httpd_suexec_t, httpd_php_t, httpd_w3c_validator_script_t, httpd_user_script_t, httpd_awstats_script_t, httpd_apcupsd_cgi_script_t, httpd_nutups_cgi_script_t, httpd_munin_script_t, httpd_openshift_script_t, httpd_sys_script_t, httpd_dspam_script_t, httpd_prewikka_script_t, httpd_git_script_t, httpd_unconfined_script_t, httpd_t, httpd_helper_t, httpd_squid_script_t, httpd_cobbler_script_t, httpd_mediawiki_script_t
SELinux works elegantly on a very lightly configured system where there are a small number of apps and you got them all from the distro.
> SELinux has been able to solve the problem of "if a policy says X can't get to Y, prevent X from getting to Y" for years. Regular UNIX permissions have been doing the same for decades. (Yes, SELinux and regular UNIX permissions take a different approach / let you write the policy differently, but that's the problem they're fundamentally solving; given a clear description of who to deny access to, deny this access.) Neither SELinux nor UNIX permissions nor anything else has solved the problem of "Actually, in this circumstance I mean for X to get to Y, but in that circumstance I don't, and this is obvious to a human but there's no clear programmable distinction between the cases."
I don't think even your examples are obvious to everyone. For example, if paramiko is installed as a dependency of something else, it's not clear that you want to grant it access to your keys. Further, you might want to grant access to some but not all keys. There are many nuances unique to the particular use case, and I don't think they're obvious. However, that doesn't excuse the inability of our tools to let us describe these relationships.
Is it possible the OS should be designed so that only the OS can read the actual private key? Then, at least at some level, apps never need to see the key; they just need permission to authenticate.
As described in a sibling thread, while that contains the damage, that doesn't actually solve the problem - you're still giving malicious code permission to authenticate.
Shoving everything into namespaces doesn't solve everything because at the end of the day those 'containers' are still regular 'ole processes running in the root namespace under UIDs that are valid in the root namespace.
Qubes instances are native virtualized via hardware-assisted virtualization, so there shouldn't be any ability for instances to access root UID processes unless the hardware virtualization solution's security fails (Intel's cache/branch prediction attacks).
That's a problem with Linux not taking container security seriously. There are other, more secure container implementations in different kernels. Containers are a good abstraction, but Linux does them poorly.
Without absolving us all of our responsibility to do better, to create systems that minimize the potential damage by bad actors, "I want to be able to download and execute random code off the internet and have it access my file system, my cameras, my network, my everything, but only in ways that don't hurt me", has always been kind of a pipe dream.
It's also kind of hilarious when the person downloading and executing random code off the internet isn't the naive user, the pointy haired boss, the stereotypical grandmother, the dumb tween, but the "sophisticated", computer-savvy, net-wise programmers.
If you are installing paramiko or ansible, you know what you are doing. Giving them access should be a conscious end-user choice. You should not be prevented from granting that access, but neither should it be granted without the end user knowing it.
You don't want "some library, from somewhere" to automatically have access to everything on your machine.
I also agree with all the people commenting that this is a solved problem in technical terms. If someone installs random stuff, it is like crossing the street with closed eyes: you might not get hit by a car, but the chances are much higher than if you take your time and look at what you are doing.
The assumption that all users know what all processes, tools, or facilities are doing, at all times, and at all instances, has proved false far too many times.
You could argue that the user should know what they're doing, but then, drivers shouldn't crash cars, and pilots shouldn't crash aircraft.
Numerous elements of this problem are simply hard, perhaps impossible, to resolve. If the problem is what Neal Stephenson called metaphor shear in "In the Beginning Was the Command Line", then the fundamental problem isn't technical: people generally are operating under a false mental model of what computers are, what they can do, and how they behave.
Yes, "all models are false, some are useful". The utility of this one may be past its sell-by date.
That approach has two problems. First is that access isn't fine-grained enough - you often have to grant access to far more than you intend. Second is that there's no way to know why an app is asking for access, or to be certain what it's going to do with that access.
One nice thing about a virtualenv is that you get a copy of the Python interpreter in there, so it is in fact a separate executable running, for the purposes of hanging the policies off of.
Depending on the platform, virtualenv defaults to creating a symlink to the Python executable. You can override it with --copies, but then you have a new problem: updating the interpreter in all virtualenvs when a new Python release comes out.
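The difference is easy to see with the standard `venv` module, where `symlinks=False` corresponds to virtualenv's `--copies`. The `bin/` path below assumes a POSIX layout (Windows uses `Scripts\python.exe` and copies by default):

```python
import os
import tempfile
import venv

linked = tempfile.mkdtemp()
venv.EnvBuilder(symlinks=True).create(linked)    # the POSIX default
py_linked = os.path.join(linked, "bin", "python")
print(os.path.islink(py_linked))                 # True: shares the base interpreter

copied = tempfile.mkdtemp()
venv.EnvBuilder(symlinks=False).create(copied)   # like virtualenv --copies
py_copied = os.path.join(copied, "bin", "python")
print(os.path.islink(py_copied))                 # False: a private copy you must update
```

With symlinks, a base interpreter upgrade reaches every venv automatically; with copies, each venv keeps the interpreter version it was created with until you rebuild it.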