They are for large infrastructure projects, especially at large organisations.
Migrating off these products takes companies 3-5 years, and because none of that work is CapEx funded, it gets minimal resourcing unless leadership prioritises it.
The same way you write malware without it being detected by EDR/antivirus.
Bit by bit.
Over the past six weeks, I’ve been using AI to support penetration testing, vulnerability discovery, reverse engineering, and bug bounty research. What began as a collection of small, ad-hoc tools has evolved into a structured framework: a set of pipelines for decompiling, deconstructing, deobfuscating, and analyzing binaries, JavaScript, Java bytecode, and more, alongside utility scripts that automate discovery and validation workflows.
I primarily use ChatGPT Pro and Gemini. Claude is effective for software development tasks, but its usage limits make it impractical for day-to-day work. From my perspective, Anthropic subsidizes high-intensity users far less than its competitors, which limits how far you can push its models. That said, their models have become more economical recently, and I'd shift to them completely purely on the strength of their models and infrastructure.
Having said all that, I've never had issues with providers regarding this type of work. While my activity is likely monitored for patterns associated with state-aligned actors (similar to recent news reports you may have read), I operate under my real identity and company account. Technically, some of this usage may sit outside standard Terms of Service, but in practice I'm not aware of any penetration testers who have faced repercussions -- and I'd quite happily take the L if I fell afoul of some automated policy, because competitors would quite happily take advantage of that situation. Larger vuln research/pentest firms may deploy private infrastructure for client-side analysis, but most research and development still takes place on commercial AI platforms -- and I've never heard of a single instance of Google, Microsoft, OpenAI, or Anthropic shutting down legitimate research use.
I've been using AIs for RE work extensively, and I concur.
The worst AI when it comes to the "safety guardrails" in my experience is ChatGPT. It's far too "safety-pilled": it brings up "safety" and "legality" in unrelated topics, which means some of my tasks require coaxing. It does weird shit like see a security vulnerability and actively tell me that it's not really a security vulnerability, because admitting that an exploitable bug exists is too much for it. Combined with atrocious personality tuning? I really want to avoid it. I know it's capable in some areas, but I only turn to it if I've maxed out another AI.
Claude is sharp, doesn't give a fuck, and will dig through questionable disassembled code all day long. I just wish it were cheaper over the API and had higher usage limits. And that CBRN filter seriously needs to die. That one time I had a medical device and was trying to figure out its business logic? The CBRN filter just kept killing my queries. I pity the fools who work in biotech and got Claude as their corporate LLM of choice.
Gemini is quite decent, but long context gives it brainrot, far more so than other models: its instruction-following ability decays too fast, and it favors earlier instructions over later ones or just gets too loopy.
I'd be really interested to see what you've been working on :) Are you selling anything? Are you open-sourcing? Do you have any GitHub links or write-ups?
Why not just expose an HTML representation of the data? Why must it remain JSON, XML, CSV, Parquet, fixed-length or tab-delimited files, Protobuf, etc.?
APIs should provide content in the format asked of them. CSS should be used to style that content.
This is largely solved by RFC 6838, which covers media types and representations -- i.e. how the interoperability problem is addressed. https://datatracker.ietf.org/doc/rfc6838/
Already supported by .NET Web APIs, Django, Spring, Node, Laravel, RoR, etc.
Less mature ecosystems like Go have solutions too; they're just very much patchwork/roll-your-own.
Or even use OpenResty or njs in Nginx, which puts the transformation in the web service layer rather than the web application layer. So your data might be a JSON blob, and it gets converted to HTML in real time. Something similar can be achieved elsewhere, e.g. in Apache using mod_lua.
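To make the negotiation concrete, here's a minimal sketch with Express in TypeScript (a hypothetical /users route; res.format is Express's built-in Accept-header negotiation):

```typescript
import express from "express";

const app = express();
const users = [{ name: "Ada" }, { name: "Grace" }];

// One endpoint, several representations: the client's Accept header decides.
app.get("/users", (req, res) => {
  res.format({
    "application/json": () => res.json(users),
    "text/html": () =>
      res.send(`<ul>${users.map((u) => `<li>${u.name}</li>`).join("")}</ul>`),
    // No acceptable representation -> 406, per the HTTP spec.
    default: () => res.status(406).send("Not Acceptable"),
  });
});

app.listen(3000);
```

`curl -H "Accept: text/html" localhost:3000/users` gets markup; the same URL with `Accept: application/json` gets data.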
I think bastardising one format (HTML) to support another format (JSON) is probably not the right move. We've already done that with stuff like media queries, which have been abused for fingerprinting, or the :has() CSS selector used for shitty layout hacks by devs who refuse to fix the underlying structure.
Rather than adding an HTML endpoint in addition to the XML or JSON, expose the data and link it to stylesheets that dynamically render the HTML client side.
That's the whole point of XSLT: ship the data and tell the browser how to transform it to HTML.
Because that way you can separate the data from the layout. You can e.g. return a list of strings and then the strings become the summaries of a set of details elements.
The description you give inherently changes the structure of the data, and JavaScript would be the best way to post-process it. CSS is about styling the structure of HTML, not making structural changes to it.
Unless you have a good example, I think you're coming at this from an "if all you have is a hammer, everything looks like a nail" angle.
CSS is for styling the semantic structure of HTML; XSLT is a language for converting normalized data into that semantic structure. That's what I was explaining -- I wasn't talking about CSS.
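To make that concrete, a minimal sketch of the details/summary example from above (hypothetical file names; browser-side XSLT 1.0):

```xml
<?xml version="1.0"?>
<!-- data.xml (hypothetical): plain data, plus a hint telling the browser
     which stylesheet to transform it with -->
<?xml-stylesheet type="text/xsl" href="items.xsl"?>
<items>
  <item>First summary</item>
  <item>Second summary</item>
</items>
```

```xml
<?xml version="1.0"?>
<!-- items.xsl (hypothetical): each string becomes the summary of a details element -->
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/items">
    <html>
      <body>
        <xsl:for-each select="item">
          <details>
            <summary><xsl:value-of select="."/></summary>
          </details>
        </xsl:for-each>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```

The server only ever ships the XML; the browser builds the HTML.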
This is like a guerrilla solar recipe from the anarchist’s cookbook.
The author doesn't explicitly dissuade people from plugging another multipoint/powerstrip/plugstrip into the end of the extension cable they've run into the other room. So I will. Don't do that. There are plenty of cheap, thin (high-gauge) extension cables out there which will degrade fast in this setup and may cause a fire.
Also, if your landlord is okay with seeing this setup they probably don’t have insurance they’re worrying about, and are simply making sure you’re not actively destroying the property (rather than potentially destroying it with the fire hazard).
Good news! I looked on Amazon and I found one that goes all the way up to 18! It also says ‘fully copper coated’ so you know it’s good. (/s, please never do this)
I often downvote posts like this, on the grounds that excessive safety nannyism doesn't belong on a site called Hacker News... but having seen what they call a "2500W power distribution strip," yeah... have an upvote or three.
Assuming the linked products are the products in the picture, the strip is a product from Southwire that is claimed to be rated for 20 amps / 2,500W, and Southwire is an established, known brand. It is listed as being for "temporary" installations, and I'm not sure I'd want to run that load through it all the time, but it's probably not that bad.
It's 120V. Pushing 20A continuous through that kind of wiring is less than ideal.
First of all, let's assume less-than-ideal conditions and base our calculations on 115V. 2,500 watts works out to 21.7 amps; a continuous load (which is pretty reasonable for a whole house) needs a breaker and wiring rated for 125% of that, or 27.2A.
That means the supply needs to be #10 wiring and should be fitted with a 30A breaker at the disconnect. A temporary power tap is not a suitable disconnect. And I highly doubt it's got 10 gauge wiring.
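Spelled out:

$$I = \frac{P}{V} = \frac{2500\,\mathrm{W}}{115\,\mathrm{V}} \approx 21.7\,\mathrm{A}, \qquad 1.25 \times 21.7\,\mathrm{A} \approx 27.2\,\mathrm{A}$$

which is why the next standard breaker size up, 30A, and #10 copper are the minimum here.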
Don't fuck with electricity. I think the intuitions people have are based on home installations with RCDs, fuses, earthing, and proper cabling before it gets to that socket. You then plug in a power strip to draw 100W for your devices, and maybe the occasional 2kW for the vacuum cleaner for 5 minutes if you're too lazy to use another socket. You're not running the full house load through it all day.
AgenticSeek, or you can get pretty far with a local Qwen model and Playwright-Stealth or SeleniumBase integrated directly into your Chrome (running with the Chrome DevTools Protocol enabled).
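If you go the CDP route, here's a minimal sketch with plain Playwright in TypeScript (not the stealth variants above), assuming Chrome was started with --remote-debugging-port=9222:

```typescript
import { chromium } from "playwright";

(async () => {
  // Attach to the already-running Chrome instead of launching a fresh,
  // easily-fingerprinted automation browser.
  const browser = await chromium.connectOverCDP("http://localhost:9222");

  // Reuse the existing profile: cookies, sessions, extensions and all.
  const context = browser.contexts()[0];
  const page = context.pages()[0] ?? (await context.newPage());

  await page.goto("https://example.com");
  console.log(await page.title());

  await browser.close(); // disconnects; your Chrome keeps running
})();
```

The local model then just emits the goto/click/fill calls against that session.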
It is strange to launch this type of functionality without even a privacy policy in place.
It makes me wonder if they've partnered with another of their VC's portfolio companies, one that's recently had a cash injection, and they're being used as a design partner/customer story.
Exa would be my bet. YC backed them early, and they've also just closed an $85M Series B. Bing would be too expensive to run for free without a Microsoft partnership.
Get on that privacy notice soon, Ollama. You're HQ'd in CA, so you're almost certainly subject to the CCPA. (You don't need revenue to be covered; under the current thresholds, handling the personal information of 100,000+ California residents or households is enough.)
An 8% year-on-year increase is quite substantial; however, keep in mind that globally we experienced the 2022 fuel shock. In Australia, for example, energy prices doubled that year.
Although wholesale electricity prices show double-digit average year-on-year swings, their true long-run growth is closer to ~6% per year, slightly above wages at ~4% during the same period.
So power has become somewhat less affordable, but it still remains a small share of household income: wage growth has absorbed much of the real impact.
You can make it sound shocking with statements like "In 1999, a household's wholesale power cost was about $150 a year; in 2022, that same household would be charged more than $1,000, even as wages only grew 2.5x", but the real impact (on average -- obviously there are outliers, and low-income households are disproportionately impacted in areas where government doesn't subsidise) isn't major.
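For the curious, the compounding behind those figures (a rough check, using only the numbers above):

$$1.04^{23} \approx 2.5 \;\text{(wages, 1999--2022)}, \qquad 1.06^{23} \approx 3.8 \;\text{(the ~6\%/yr trend)}, \qquad \left(\tfrac{1000}{150}\right)^{1/23} \approx 1.086$$

so the $150-to-$1,000 headline implies ~8.6%/yr -- it bakes the 2022 spike into the endpoint rather than reflecting the long-run trend.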
I wouldn't call a $100-270 electric bill a "fraction" when it's about 5% of post-tax income. I use a single light on a timer and have a small apartment.
Especially since these sorts of corporations can get tax breaks or have ways of getting regulators to allow spreading the cost. Residential customers shouldn't see any increase due to data centers, but they do, and will keep subsidising them while seeing minimal improvements to infrastructure.
When people are told to minimize air conditioning while these big datacenters get built and are never told to "reduce your consumption", it doesn't matter how big or small the electric bill is: it's subsidising a multi-billion-dollar corporation's toy.
Here's a one-liner for node devs on macOS: pin your versions and manually update your supply chain until your tooling supports supply-chain vetting, or at least some level of protection against instantly-updated malicious upstream packages.
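Something along these lines (a minimal sketch, not a complete defence; save-exact and ignore-scripts are standard npm config flags):

```sh
# Pin exact versions on install, and refuse lifecycle scripts -- the usual
# place freshly-published malicious packages run their payload.
npm config set save-exact true && npm config set ignore-scripts true
```

Note that ignore-scripts will break packages that genuinely need native build steps, so you may have to re-enable scripts case by case.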
Would love to see some default-secure package management / repo options. Even a 24-hour delayed mirror would be better than what we have today.
The expected secure workflow should not require an elaborate bash incantation; it should be the workflow the tools naturally encourage you to use. "You're holding it wrong" cannot be possible.
When it comes to accidentally installing a malicious package in your dev environment, the concern isn't "what's already installed"; it's what's potentially going to be installed in the future by you or your colleagues.
So, you pin the version and update periodically when security issues arise in your dependencies.