Hacker News | JJJollyjim's comments

They mention that compiling one crate at a time (-j1) doesn't give the 7x slowdown, which rules out the object-file/caching-in-rustc theories... I think the only explanation is that the rustc processes are sharing limited L3 cache.


The L3 cache angle is one of our hypotheses too. But it doesn't seem like we can do much about it.


It is in fact documented that you can't do this:

"Currently the default global allocator is unspecified. Libraries, however, like cdylibs and staticlibs are guaranteed to use the System by default.", however:

"[std::alloc::System] is based on malloc on Unix platforms and HeapAlloc on Windows, plus related functions. However, it is not valid to mix use of the backing system allocator with System, as this implementation may include extra work, such as to serve alignment requests greater than the alignment provided directly by the backing system allocator."

https://doc.rust-lang.org/std/alloc/index.html https://doc.rust-lang.org/std/alloc/struct.System.html


> such as to serve alignment requests greater than the alignment provided directly by the backing system allocator

Surely the system allocator provides memalign() or similar? Does Windows not have one of those?


Kind of but not really: the Windows heap doesn't support arbitrary alignments natively, so the aligned-malloc function it provides is implemented as a wrapper around the native heap which over-allocates and aligns within the allocated block. The pointer you get back isn't the start of the underlying allocation in the native heap, so you can't pass it to the standard free() without everything exploding.

I don't think fixing that mess was ever a priority for Microsoft because it's mainly an issue for C, and their focus has long been on C++ instead. C++ new/delete knows the alignment of the type being allocated for so it can dispatch to the appropriate path automatically with no overhead in the common <=16 byte case.


On macOS, aligned_alloc works starting from macOS 10.15.


Presumably the metabase instance also has credentials to access some databases, some of which may be have enough privileges to also get RCE on the database machines (as well as messing with the data they hold).


We issue separate read-only credentials for database access, fortunately. Still doesn't remove the risk of all the data being exfiltrated, though.


They haven't released the source, and the compiled versions are non-trivial to diff (e.g. there are nondeterministic numbers from the clojure compiler that seem to have changed from one to the other, and .clj files have been removed from the jar).

The old version has `hash=1bb88f5`, which is a public commit: https://github.com/metabase/metabase/commit/1bb88f5

Whereas the new version has `hash=c8912af`, which is not: https://github.com/metabase/metabase/commit/c8912af


I could be wrong (and often am), but I am seeing updates related to Druid client authentication.


I didn't even know you could have a "private" commit on GitHub/an open source repo like that.


Oh, I didn't mean to imply you can, just that it's a 404... presumably it exists in a repo checked out on someone's machine, and maybe in a separate private GitHub repo.


This is silly on my end (I woke up early and have time to kill)...

Also like, note: I would never publicly disclose whatever I find, I'm just curious

I observed exactly what you said about the Clojure filenames not matching up, etc. etc.

    #!/bin/bash
    
    # Directories of decompiled sources
    DIR1=~/metabase-v0.46.6.jar.src # decompiled with jd-cli / jd-gui (java decompiler)
    DIR2=~/metabase-v0.46.6.1.jar.src # decompiled with jd-cli / jd-gui (java decompiler)
    
    # Create a fuzzy hash for each file in a directory
    create_fuzzy_hashes() {
      local dir=$1
      # -print0 / read -d '' so filenames with spaces survive
      find "$dir" -type f -print0 | while IFS= read -r -d '' file
      do
        ssdeep -b "$file" >> "$dir/hashes.txt"
      done
    }
    
    # Create fuzzy hashes for each file in the directories
    create_fuzzy_hashes "$DIR1"
    create_fuzzy_hashes "$DIR2"
    
    # Match DIR2's signatures against DIR1's known hashes
    ssdeep -k "$DIR1/hashes.txt" "$DIR2/hashes.txt"
How far do you think this gets us (fuzzy hashing)?

I was thinking this, or binary diffing the .class (instead of the "decompiled" .java)?


I found something which is clearly a security fix, using the same idea but more naive: just diffing the lengths of the decompiled files. It's not at all clear how the issue I found would be triggered by an unauthenticated user, though.


llama.cpp runs on the CPU, not the ANE or GPU.


Shodan lists 125,000 HA installs exposed to the internet (though I don't know how accurate that statistic is, nor what fraction have a Supervisor) https://www.shodan.io/search?query=product%3A"Home+Assistant...


Wouldn’t it have to be an exposed supervisor port to the internet? It’s not enough to just be running supervisor and having home assistant exposed, right?


I don't believe so – from a quick glance it seems that supervisor requests are proxied through the normal HA port, and the supervisor doesn't have its own port except for the Observer (which seems to be a simple read-only thing).


As a random example, if your Microsoft account is OAuthed to a GitHub login, and you log in through that, the popup browser just takes you back to a Microsoft account settings page instead of handing the OAuth flow back to Minecraft.


[co-author of the research here]

They actually approximate this functionality in the Windows implementation: It checks netstat to enforce that incoming TCP connections are from the expected Windows user! https://github.com/tailscale/tailscale/blob/2a991a3541ae5d56...

That's why we were happy with the solution they implemented as a stopgap, until they could switch to named pipes (which there is now an open PR for).


Huh, ok, that's not so bad then.

It feels like there could still be a TOCTOU issue there, but it'd be difficult to use.


Generally speaking, allowing privileged operations because a specific user asked over a TCP socket is asking for trouble: there are quite a few ways that unwitting processes could open a socket on behalf of an attacker without realizing that it is asserting its identity and thus granting privilege.

All the major clouds get this IMO entirely wrong with their services that issue secrets to instances (e.g. AWS IMDS).


With TCP being connection-oriented I think it's not too hard to get right, especially if the OS won't reuse a closed socket right away. Definitely worth considering, though. Of course it's doable without netstat if you can track down the right APIs: https://stackoverflow.com/questions/47659365/find-process-ow...


I'd also like to see motivating examples of specific things it does find (that LLVM doesn't), even if that's not a representative benchmark


Android has an app-signing system which isn't dependent on Google Play. Updates to a given app have to be signed with the same certificate as previous versions.

