Short version: Jevons's paradox means that the more coding you automate away for developers (like with compilers in the past), the more in-demand those developers are. (Until AGI when all bets are off, of course.)
There's cope in the comments about the possibility of some software-adjacent jobs remaining, which is possible, but the idea of a large number of high-paying software jobs remaining by 2030 is a fantasy. Time to learn to be a plumber.
Even if you don't think AI will replace the job of software developer completely, there's no way compensation is going to stay at the current level when anyone can ship code.
I came across this the other day but I couldn’t really grok how it works. Does it run at a higher privilege level than the workflow or the same? Can a sophisticated enough attack just bypass it?
I spent a few seconds clicking into it before the newfound 429 responses from GitHub caused me to lose interest
I believe a sufficiently sophisticated attacker could unwind the netfilter and DNS changes, but in my experience every action you take during a blind attack is one more opportunity for things to go off the rails. The extra instructions (especially ones referencing netfilter and DNS changes) could also make the payload harder to smuggle in via an obfuscated code change (assuming that's the attack vector).
That's a lot of words to say that this approach could be better than nothing, but one will want to weigh its gains against the hassle of keeping its allowlist rules up to date given your supply-chain landscape.
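For concreteness, allowlists of this kind typically match egress hostnames by domain suffix. Here's a minimal sketch of that matching logic (a hypothetical illustration of the general technique, not Bullfrog's actual implementation; the domain set is made up):

```python
# Hypothetical sketch of suffix-based egress allowlist matching,
# similar in spirit to what a DNS/netfilter-based filter applies.

ALLOWED_DOMAINS = {"github.com", "api.github.com", "pypi.org"}  # example rules

def is_allowed(hostname: str) -> bool:
    """Allow a hostname if it equals, or is a subdomain of, an allowed domain."""
    hostname = hostname.lower().rstrip(".")
    return any(
        hostname == allowed or hostname.endswith("." + allowed)
        for allowed in ALLOWED_DOMAINS
    )

print(is_allowed("uploads.github.com"))  # subdomain of github.com -> True
print(is_allowed("evil-github.com"))     # suffix trick, not a subdomain -> False
```

Note the `"." + allowed` check: naive `endswith("github.com")` would wave through `evil-github.com`, which is exactly the kind of rule-maintenance footgun being weighed here.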
Hey, I'm one of the co-authors of Bullfrog. As you say, a sophisticated and targeted attack could indeed bypass our action. It's mostly meant for blocking opportunistic attacks.
I don't think any egress filtering could properly block everything, given that actions need to interact with GitHub APIs to function, so it would always be possible to exfiltrate data to any private repo hosted on GitHub. While some solutions can inspect outbound HTTP request payloads before they get encrypted, using eBPF, in order to detect egress to untrusted GitHub orgs/repos, this isn't a silver bullet either, because it relies on targeting the specific encryption binaries used by the software/OS. A sophisticated attack could always use a separate obscure or custom encryption binary to evade detection by eBPF-based tools.
So like you say, it's better than nothing, but it's not perfect and there are definitely developer-experience tradeoffs in using it.
PS: I'm no eBPF expert, so I'd be happy if someone can prove me wrong on my theory :)
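To illustrate the payload-inspection idea: a hook on the TLS library (e.g. on `SSL_write`) sees the plaintext HTTP request before encryption, from which the target org/repo can be parsed and checked. A minimal sketch of such a check, assuming a hypothetical trusted-org list and the `/repos/<org>/<repo>/...` shape of GitHub API paths (not any real tool's code):

```python
# Hypothetical check an eBPF-based inspector might apply to a plaintext
# HTTP request captured before encryption (e.g. via a hook on SSL_write).

TRUSTED_ORGS = {"my-org"}  # example: orgs we allow sending data to

def flags_untrusted_github_egress(request: str) -> bool:
    """Return True if a request to api.github.com targets an untrusted org."""
    lines = request.split("\r\n")
    _method, path, _version = lines[0].split(" ", 2)
    host = next(
        (l.split(":", 1)[1].strip() for l in lines if l.lower().startswith("host:")),
        "",
    )
    if host != "api.github.com":
        return False  # not a GitHub API call; out of scope for this check
    # GitHub API repo paths look like /repos/<org>/<repo>/...
    parts = [p for p in path.split("/") if p]
    if len(parts) >= 2 and parts[0] == "repos":
        return parts[1] not in TRUSTED_ORGS
    return False

req = "POST /repos/attacker-org/exfil/issues HTTP/1.1\r\nHost: api.github.com\r\n\r\n"
print(flags_untrusted_github_egress(req))  # True: untrusted org
```

The catch raised above applies here too: this only works if the hook sees the plaintext, i.e. if the eBPF probe is attached to the encryption binary the attacker actually uses.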
Yep, and there's an opt-in to disable sudo, which prevents circumvention. However, this can break some actions, especially ones deployed as Docker images. It also doesn't work on macOS.
I'll have to do something with my hands; kitchen skills are probably the only manual labor I can do better than average.
It's not a matter of fantasizing about an idealized version of a dirty, hard blue collar job, but an honest assessment of something that will last longer than coding and will pay me money to feed myself.
Do these challenges apply to surgical robots? There's a lot of interest in essentially creating automated da Vincis, for which there is a great deal of training data and for which the robots are pre-positioned.
Maybe all this setup means that completing surgical tasks doesn't count as dexterity.
What's the pricing? Do you take the entire 3% commission? If so, ouch, and do you anticipate this will be affected by the recent antitrust cases?
Is your use of AI just ChatGPT API calls? How quickly could you scale given there are still humans in the loop? What are the blockers to getting humans out of the loop?
Our pricing is negotiable. What we're seeing so far is that >95% of homes are still offering full 2.4% buyer-agent concessions in the Bay Area.
Our main service is our texting service, which you text as if it were a normal realtor; it can get you market comparisons, schedule tours by texting agents, ask whether there are offers on homes, etc.
The main blocker for getting humans out of the loop is user trust. It's quite simple to call APIs that pull specific data, text people, and fill forms, but real estate agents have a very high CAC (customer acquisition cost) problem due to this trust issue and competition.
MVP has completed 1 escrow via text bot, we got paid 27k, another home went into escrow yesterday, 35 clients touring homes with us. We registered with our real estate brokerage in July.
> MVP has completed 1 escrow via text bot, we got paid 27k, another home went into escrow yesterday, 35 clients touring homes with us. We registered with our real estate brokerage in July.
That's amazing traction! Congratulations!
How much manual intervention did these first couple of sales take? Are you using your own local/hosted model and fine-tuning or otherwise incrementally improving it?
Disclosures analysis and text-to-tour scheduling were first to be automated, and those were available for our first close. We now have market analysis, offer drafting, full AI texting, etc.