> Without having to re-render the whole table on each change.
Not quite sure what the author means by that. Re-rendering only happens once the current task on the queue has been processed, never while JS is running (aside from web workers and the like). I would honestly be surprised if this API had much (if any) performance benefit over createElement.
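A rough sketch of what I mean (hypothetical table markup, not anything from the article): every mutation below happens within a single task, so the browser performs style, layout, and paint exactly once, after the loop finishes.

```js
// All of these DOM mutations run inside one task; the browser does not
// re-render between appendChild() calls.
const table = document.querySelector('table');
for (let i = 0; i < 1000; i++) {
  const row = document.createElement('tr');
  const cell = document.createElement('td');
  cell.textContent = `row ${i}`;
  row.appendChild(cell);
  table.appendChild(row);
}
// Rendering happens only after this task completes and control returns
// to the event loop.
```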
If you ever spend time with the low-level SAP GUIs, then yes, you will find out why that's definitely a bad thing. Software should reflect users' processes. The code below is just an implementation detail and should never impact the design of the interfaces.
Two, even if we did, DOMPurify is ~2.7x bigger than lit-html core (3.1Kb minzipped), and the unsafeHTML() directive is less than 400 bytes minzipped. It's just a really big dependency to take on for a sanitizer, and which one to use is an opinion we'd have to have. And lit-html is extensible, so people can already write their own safeHTML() directive that uses DOMPurify.
For us it's a lot simpler to have safe templates, an unsafe directive, and not to parse things too finely in between.
A built-in API is different for us though. It's standard, stable, and should eventually be well known by all web developers. We can integrate it with no extra dependencies or code, and just adopt the standard platform options.
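For anyone curious what that DIY route looks like, here is a minimal sketch of a user-land safeHTML() helper (assuming the app already depends on DOMPurify; this is not part of lit-html itself):

```js
import { html, render } from 'lit-html';
import { unsafeHTML } from 'lit-html/directives/unsafe-html.js';
import DOMPurify from 'dompurify';

// Sanitize first, then hand the cleaned markup to the existing
// unsafeHTML() directive; the choice of sanitizer stays with the app.
const safeHTML = (markup) => unsafeHTML(DOMPurify.sanitize(markup));

const untrustedMarkup = '<b>hi</b><img src="x" onerror="alert(1)">';
render(html`<article>${safeHTML(untrustedMarkup)}</article>`, document.body);
```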
Are you certain that this is secure? What about parsing depth/DOM clobbering, etc?
See https://mizu.re/post/exploring-the-dompurify-library-bypasse... for an example of why this is really hard. Please do not roll your own sanitizers; DOMPurify has very good maintenance hygiene, and the maintainer is an expert. I have reported a bunch of issues in the past and never waited more than two hours for a response.
He is also one of the leading authors of the specification behind `setHTML`.
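For reference, the built-in API being discussed looks roughly like this (the Sanitizer API and Element.setHTML() are still experimental, so the exact shape may change):

```js
// setHTML() parses the string and inserts it with unsafe markup removed
// by the browser's default sanitizer -- no third-party library required.
const container = document.querySelector('#comment');
container.setHTML('<p>hello <img src="x" onerror="alert(1)"></p>');
// The onerror event handler is stripped; the <p> and <img> remain.
```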
My code accepts only a very limited subset of HTML tags and their respective attributes (<a>, <img>, <font>, <br>, <b>, <strong>, <i>, <em>, <del>, <s>, <u>, <p>, <hr>, <li>, <ul>, <ol>).
I could easily add more, like headings or tables; I just decided not to overwhelm the readers. But all of the allowed elements/attributes here are harmless. When copying, I only copy the known-safe elements and attributes and forbid everything unknown, including scripts, event handlers, style attributes, ids, and even classes. I have fine-grained control over the allowed elements/attributes and the structure, which makes things much easier. For basic HTML content management this kind of filtering is fine, since DOMParser actually does the heavy lifting.
Sure, DOMPurify is powerful and handles much more complex use cases (doesn't it also use DOMParser, though?), no doubt about that. But a basic CMS probably only has to handle basic HTML text elements. I guess inline SVG sanitization is more complicated (maybe just use an ordinary <img> instead?).
If you have an HTML example that will inject JS/CSS or cause any unexpected behavior in my code example, please provide that HTML.
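To make the approach concrete, here is a rough sketch of that kind of allowlist copy (hypothetical element and attribute lists, not the commenter's actual code):

```js
// Parse untrusted HTML inertly with DOMParser, then rebuild a new tree
// containing only known-safe elements and attributes. Illustrative only.
const ALLOWED_TAGS = new Set(['A', 'IMG', 'BR', 'B', 'STRONG', 'I', 'EM',
                              'DEL', 'S', 'U', 'P', 'HR', 'LI', 'UL', 'OL']);
const ALLOWED_ATTRS = { A: ['href'], IMG: ['src', 'alt'] };  // hypothetical

function sanitize(untrustedHtml) {
  const doc = new DOMParser().parseFromString(untrustedHtml, 'text/html');
  const out = document.createDocumentFragment();

  (function copy(source, target) {
    for (const child of source.childNodes) {
      if (child.nodeType === Node.TEXT_NODE) {
        target.appendChild(document.createTextNode(child.nodeValue));
      } else if (child.nodeType === Node.ELEMENT_NODE &&
                 ALLOWED_TAGS.has(child.tagName)) {
        const el = document.createElement(child.tagName);
        for (const name of ALLOWED_ATTRS[child.tagName] || []) {
          const value = child.getAttribute(name);
          // Naive scheme check; a real filter needs a stricter URL policy.
          if (value !== null && !/^\s*javascript:/i.test(value)) {
            el.setAttribute(name, value);
          }
        }
        copy(child, el);               // recurse only into allowed elements
        target.appendChild(el);
      }
      // Disallowed elements are dropped entirely, subtree included.
    }
  })(doc.body, out);

  return out;  // append this fragment wherever the content should go
}
```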
The app developers can still use that right now, but if the framework forced its usage it would unnecessarily increase package size for people who don't need it.
Unlikely. If a company does not have a formal BBP, they won't pay 99.99% of the time. Brokers are also not interested in vulnerabilities in individual companies; they usually only buy vulnerabilities in standard software (components).
Again, there really isn't a big market for such vulnerabilities. No 0day broker will buy the vulnerabilities listed in the article. They might be able to sell to an initial access broker, but even there these kinds of vulnerabilities are not really interesting to them.
If that’s the case, then why do companies run bug bounties?
I’m asking earnestly; it seems like if nobody actually cares about these gaps then there shouldn’t be an economic driver to find them, and yet (in many companies, but not Burger King) there is.
Is it all just cargo culting or are there cases where company vulnerabilities would be worth something?
Oh no, they do get exploited; just not bought. Buying vulnerabilities is by itself time-intensive, complex work: grey-market escrow, finding trusted sellers and buyers, etc. So buying and selling vulnerabilities only really happens for really impactful and generally useful ones.
They are heavily used in penetration tests and red teaming engagements. Banning such tools from the public just obscures attackers' methods from defenders, while not in any way hindering serious malicious actors. We had that discussion back in the 90s and early 2000s.
Agreed. Plus there isn't always a clear line between offensive and legitimate usage. For many years nmap was banned on most corporate networks, yet it's an invaluable tool for legitimate use despite also being useful for offensive purposes.
It's mainly because nmap detection is a feature of most IDSes, so it's bound to raise some red flags.
Same with even doing packet sniffing. It can be detected when using Wireshark, because in its default configuration it does reverse DNS lookups for each IP it sees.
I had legit reasons for it at work, so I always mentioned it to the network guys before doing stuff like this. We also had a firewalled lab network. We did get some pushback once when some scans leaked out to the office network, but that was their fault for leaving the firewall open.
I ran 'neoprint.php' on myself at Facebook in 2007 and immediately got a stern email about it... It was some script that collected info for responding to law enforcement requests. But after chastising me, the email said "I was gratified that you ran it on yourself". (as opposed to snooping on someone else!)
It was just a summer internship, and FB had 'only' about 80 engineers back then. But they still took it seriously.
I think that's a little different. It sounds like neoprint.php is an internal Facebook tool for looking up data on Facebook users, so improper usage of it is a privacy problem for users. It's something misbehaving employees might run against celebrities, exes, etc. (e.g. https://www.gawkerarchives.com/5637234/gcreep-google-enginee... )
Otoh nmap isn't a privacy problem for users of Facebook (or any other tech company).
Yea totally agree. Mainly just wanted to shoehorn in my own story about stern emails at FB! Also I think running nmap on your own development machine is totally legitimate. Lots of reasons you might want to do it.
+1. If I can't run nmap or netcat, or have to justify it each time, I can't do my job. Better off elsewhere.
I've departed early at least twice over this. Draconian IT serves nobody. I've been doing this long enough that I deliberately poke at any new employer's policies to see what's in store.
Nobody cares, though. EDR appliances sell whether or not they're carefully administered. The industry will outlive us all.
While that may be true, it's less true for things like Cobalt Strike. I'm not saying that banning tooling would be a good thing, but comparing nmap to remote access tools is a bad argument.
I don't disagree, but GP is asking about all offensive tools, not just Cobalt Strike. IMHO a platform like GitHub should not be picking and choosing which projects are offensive enough to remove. Yes, some tools are pretty clearly more offensive than others, but creating a policy would not be clear-cut.
Cobalt Strike is just an automated script kiddie, really. It's a way for red teamers to catch low-hanging fruit, and because of that there isn't much low-hanging fruit left anyway.
The "generational guilt" theory does not check out for me at all. Coming from Central Europe, I mostly hear this rhetoric from English-language sources. In non-English European media, generational guilt for colonization is hardly a thing in my experience.
The Ars article links to the Malwarebytes one, but the Ars article is better. Its headline is better, and the most interesting part is that they run code from an SVG. Ars also adds context on how the same hole was used before to hijack Microsoft accounts, and also by the Russians. The Malwarebytes piece, meanwhile, is mostly about porn-site clickjacking to like Facebook posts (and complains about age verification), though it has a bit more technical detail. Read both, I guess?
Not my son, but I did teach my younger brother programming, from when he was about 10 to when he was about 14. I started when he showed interest in my programming work. I ended up gifting him a book on programming for kids, then nudging him into working on it every now and again and helping him out when he had issues. Mostly my goal was to keep him motivated to learn (showing him interesting projects I had been working on, etc.). In my experience, with motivation and time the skills will come by themselves; without motivation, every attempt is pointless.
It was a slow burn, but over the course of four years he ended up learning quite a lot. He is now one of the best programmers in his college.
...or that anyone who thinks "I'd start a company if I could become the next Apple, but otherwise it's pointless" is someone you want running a company.
I am going to the Defcon CTF Finals at the Defcon conference this year. Coming from Europe, I know of multiple people who will participate remotely because of the political climate. I would be lying if I said I hadn't thought about skipping the USA myself. In the cybersecurity space especially, things have always been difficult.
I am competing as well. Coming from Europe, it's still a bit uncertain which team members are able and willing to travel to the USA in the current political climate.