They went and hired ransomware-as-a-service hackers and sold out their respective charges in exchange for 80% of the ransom.
They had degrees and certifications and job experience with big-name firms, and they were dumb as bricks. I think it's a gold-plated example of modern credentialism. We're churning out hordes of "certified" idiots getting green-lit by pedigreed managers and MBAs following "successful patterns", and nobody has a damn clue how things work or why. And we let them vote.
Yeah, you are right. Android is a variant of Linux.
Just like macOS and iPadOS: both run the Darwin kernel, but they are very different. Even though iPads have very capable hardware, there is a lot you cannot do on iPadOS.
The same applies to Linux. People want a real desktop operating system.
Unfortunately, when people call their OS Linux, they are typically referring to its kernel. I wish there were a better term to distinguish Linux distros from the kernel.
I do have a couple of asserts in my code, but that does not explain why every stack trace begins and ends with "assert()"
Actually, I call syscalls from assembly, and my _start() is also an assembly function. That is why this happens, I guess. strace somehow doesn't play well with code written in a .s file?
The backtrace code probably won't, as it'll depend on the C calling convention to generate it. It's also possible assert() is just the function at the start or end of your text section, so any address that falls outside the range in that direction gets attributed to it, including values the stack walker incorrectly thinks are addresses.
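If you want cleaner attribution, here's a minimal sketch of what usually helps, assuming GAS on x86-64 (these are standard directives, nothing project-specific): give the assembly symbol a function type and a size so symbolizers know where it ends, and terminate the frame-pointer chain so a frame-pointer-based walker stops at _start.

        .text
        .globl  _start
        .type   _start, @function      # mark it as a function for symbolizers
    _start:
        xorq    %rbp, %rbp             # terminate the frame-pointer chain here
        movq    $60, %rax              # placeholder body: raw exit(0) syscall
        xorq    %rdi, %rdi
        syscall
        .size   _start, .-_start       # without a size, nearby addresses can be
                                       # attributed to whatever symbol precedes them

If the traces come from something that actually unwinds the stack (strace -k, perf, a debugger), adding .cfi_startproc/.cfi_endproc around the body would be the next step, since that's what DWARF-based unwinders read.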
> Please don't be a dev with access to things my security depends on.
Devs just want “Do Anything Now mode”, who doesn't? :)
I doubt that, at most shops, VSCode extension security is treated any differently from the other extensions or libraries used throughout the codebase.
Almost nobody reads every line of their includes. (Sadly, but obviously.)
If all your includes can come only from an artifact depot or mirror because your build system can't reach the Internet, and those only get there with a PR and code review, you're off to a good start. Operationalizing that so you stay current with zero-day fixes and the like starts to be "a thing", which is why there's so much energy in software supply-chain security, through industry efforts like SLSA and GUAC and firms like https://www.kusari.dev/.
Read a decent checkpoint and overview on this emerging ecosystem here (from 2022):
The confusing marketing and description around it are unfortunate - DevKit provides Visual Studio-style solution and test explorers, and maybe additional features added since I last verified this.
To write C#, however, you do not need that - the base C# extension, which has no licensing strings attached, is the actual language server you need. And a solution explorer, should you want one, can be restored with Ionide, which doubles as a language server for F#.
And, hey, at least it's not like with Oracle, where the SDK distribution itself has a trap :)
Only if the user base is large enough to gather enough interest. I can see someone reverse-engineering popular devices like the Nintendo Switch, but not bionic eyes used by a few hundred people.
I originally started with Intel syntax when learning assembly, since that was simply what my textbook used. I sometimes ran into weird syntax issues I didn't yet understand.
For example, why "mov eax, [ebx + 2 * ecx + 4]" is allowed, but not "mov eax, [ebx + ecx + edx]" (1)? Or maybe "mov eax, [ebx + 3 * ecx]" (2)? Or maybe "mov eax, [2 * ebx + 4 * ecx]" (3)?
As long as it is a math expression it should work, right? Why did the compiler keep telling me the expression was invalid?
Later, when I learned AT&T syntax, everything started to make sense. The syntax ensures you cannot construct (1) or (3), and when you try to write (2), it explicitly tells you it expected 1, 2, 4, or 8 but got 3.
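To make that concrete, here is roughly how those cases look in AT&T (GAS) syntax - just a sketch, and the exact error wording will depend on your assembler version:

    # Intel: mov eax, [ebx + 2*ecx + 4]    AT&T template: disp(base, index, scale)
        movl    4(%ebx,%ecx,2), %eax   # fine: disp 4, base ebx, index ecx, scale 2

    # (2) Intel: mov eax, [ebx + 3*ecx]
    #   movl    (%ebx,%ecx,3), %eax    # rejected with something like
    #                                  # "expecting scale factor of 1, 2, 4, or 8"

    # (1) [ebx + ecx + edx] and (3) [2*ebx + 4*ecx] cannot even be written:
    # the template only has room for one base register and one scaled index register.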
I have used AT&T syntax ever since.
In my opinion, the most confusing part of AT&T syntax is the conditional instructions. I guess the author of the article did not do much programming in AT&T syntax, so they did not notice.
cmpl %eax, %ebx
jae label
Now tell me: if eax is bigger, should you jump to label?
With Intel syntax, by contrast, it is pretty straightforward: "jae ..." following "cmp ebx, eax" translates to "if (ebx >= eax) goto ...".
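For anyone else reading along, the two spellings are the same instruction; only the operand order differs (this is just the snippet from above, annotated):

    # AT&T reads "backwards": cmpl %eax, %ebx computes ebx - eax
        cmpl    %eax, %ebx
        jae     label              # taken when ebx >= eax (unsigned)

    # Intel reads in source order and computes the same ebx - eax:
    #   cmp     ebx, eax
    #   jae     label              # taken when ebx >= eax (unsigned)

So the answer to the question above: if eax is bigger, the jump is not taken.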
(BTW, the cat or whatever following your mouse pointer is super annoying)
> For example, why is "mov eax, [ebx + 2 * ecx + 4]" allowed, but not "mov eax, [ebx + ecx + edx]"
Because it's translated into a single machine instruction!!! "mov eax,[[ebx]]" doesn't work either - not on x86, at least; historically there were architectures that had indirect memory references :)
It's still MUCH easier to read and write. How do you even remember the order in AT&T syntax? Shouldn't the scale factor maybe be given as a shift count since that's how the hardware works? By using familiar arithmetic expressions, it becomes perfectly clear what is meant, and anyone who actually writes assembly will quickly learn what forms are allowed. And if you are only reading the code, it doesn't matter.
Of course, AT&T syntax comes from UNIX, where the zeroth commandment is "Thou shalt have no other programming languages but C and shell scripts". People who have invested the effort to memorize something like 42 different levels of operator precedence, and who fluently read and write symbol vomit like "(((void *)(int [])fn(foo,bar=baz==quux++)))", are obviously desperate to make assembly look EVEN MORE complicated...
> Because it's translated into a single machine instruction!!!
Actually we never learned the x86 binary representation of instructions in that assembly language class. The textbook also did not cover that.
All I wanted to say is: Intel syntax hides the fact that there are only four things in an address calculation: displacement, base, index, and scale. The error message is also hard to understand (at least from the compiler I used). It says something like "the expression is invalid", but you never know what went wrong.
AT&T syntax exposes that underlying structure, and its error messages are easy to understand.
Now I am okay with both AT&T and Intel, but when I was learning, I appreciated AT&T syntax more. Assembly is mandatory for CS majors at that college, and AT&T syntax made my semester easier.
It's easy enough to learn "base + index*scale +- displacement". And it can be written in exactly that way, in whatever order makes the meaning of the code clearest, as you would do it in a higher-level language.
AT&T syntax forces you to learn this before writing or even reading a single line of code that references memory, rather than leaving the illusion that something like [eax+ebx+ecx] or [[eax]] might also be allowed. I don't think that's very helpful. It also forces you to learn a very specific way of writing it that is completely unintuitive.
I'm somewhat sympathetic to the argument that assembly syntax should correspond to the underlying hardware, just not to such an extreme. For example, I prefer 8080 over Z80 for that reason (one mnemonic for each addressing mode).
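To illustrate the "whatever order" point, a quick sketch assuming an assembler like NASM, which folds the bracketed expression into base/index/scale/displacement itself - all of these assemble to the same instruction:

    mov eax, [ebx + ecx*2 + 4]
    mov eax, [4 + ecx*2 + ebx]
    mov eax, [ecx*2 + 4 + ebx]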
Well, you already know this requirement, so it is not a problem for you. However, for learners who don't have even the remotest idea of how machine instructions work, all they get is an "invalid expression" error message - the same message for all of the cases (1), (2), and (3) I listed above.
Maybe Intel syntax works better for experienced programmers. But at least for me, when I was a newbie to assembly, AT&T was better than Intel syntax. It made the learning process easier.
> AT&T syntax forces you to learn this before writing or even reading a single line of code that references memory
I do believe that rules enforced by the language are a good thing. At least they helped me understand why (1), (2), and (3) didn't work.
Many people believe Rust helps people write safer code. Why? Because of its rules. Incorrect ownership will be caught by the borrow checker, so a compilation error will force the programmer to correct it.
Maybe an experienced C++ programmer can handle memory management correctly and loves the extra freedom that C++ brings. But Rust can be helpful for learners: it provides clear error messages that help them understand why their code is wrong. And in C++? Segmentation fault.
(Needless to say, Rust also helps experienced programmers. Even experienced programmers write buggy C++ code.)