It was nighttime in Singapore when the ruling was announced. My husband and I scrambled to find a flight back. The best we could find, at any price, lands 25 minutes after the deadline.
I feel for you. I just wonder, at this point, why someone would look to go back to the US "at any price", given how badly they are being treated? From what I can see, it seems most of us non-US people are "persona non grata" in the US.
I myself am from and live in a so-called "shithole country". But especially because of my technical skills, I've got plenty of opportunities over here. I would never think of living in the USA, even though I easily could via a TN visa. But it's clear US people don't want me living there.
And even if you decide it's time to leave, you'd still want to come back and settle your affairs and plan a proper move. You wouldn't want to leave everything behind, especially if you only brought enough for a brief trip.
That's an increasing consideration for people thinking about moving to the US or those who aren't settled there yet. But, of course, people who already have family and belongings there will want to get back in to at least sort those things out before leaving for good.
I did move away from the US because of these reasons, and it's been a good decision in retrospect. But no one likes uprooting their entire life and it takes years to build a new one somewhere else.
The calculus on immigrating to the US today is clearly negative, but many people immigrated 5/10/20+ years ago before all this shit and have lives there. They did not know any of this would happen.
I'd rather just have waited until an injunction or something next week. The guidance from my company is either make it back before the deadline, or stay where you are until further notice.
Rushing to the US and getting detained by border patrol in a foreign country isn't exactly a shinier alternative at this point. I'd take my risks with my job over my life in those shoes.
the whole US visa morass is complicated and volatile enough that a lot of large companies have dedicated teams who help advise their employees on visa issues and how to best navigate them. this "guidance" is basically saying "this is our lawyers' best guess as to how to stay safe over the next few weeks"
that was not at all certain yesterday, and even now there's the constant fact that a border agent can decide to be nasty and use this as a pretext to deny you entry, with no real recourse on your part.
as a parallel example, trump recently decided that you could no longer get your visa stamped in a third country (which a lot of indians did as a matter of course, because wait times for an appointment can be very long back home). there was an explicit carve-out for people who had already made appointments at some third-country embassy, but a lot of those people went to their stamping appointment and not only didn't get the renewal, but had their existing visa cancelled (which is apparently within the powers of the embassy official). so they can't even return to the US while waiting for an appointment date in their country of residence, and are basically on unpaid leave right now (best case scenario; I would not be too shocked if some of them lose their jobs if they are away for too long).
I don't understand your comment. My reply was today, in response to a comment that was posted less than an hour before mine, both hours after this was announced. How is yesterday relevant?
microsoft sent their letter out when it was highly likely the new diktat applied to existing visa holders too. they had very little time to respond if they wanted to make sure people got home before the absurdly short deadline.
That's not generally true, of course. It requires that they're legally employed and have the proper work visas. I was confused; I thought the company must have been doing something shady for it to matter, since it doesn't apply to current visa holders.
Via what mechanism? Will they be ready to accept the payments a few hours from now? Ready to process the re-entry with procedures that aren’t even developed yet?
Getting into the country before the deadline is the only safe way to avoid the uncertainty and ensure you don’t get stranded out of the country or in an airport for days or weeks while the process is developed.
This hastily constructed and implemented executive order is a terrible way to run a country.
I lived 5 years on an L1A. You get a week to leave the country if you're laid off. But at the same time, most of the penalties/costs fall on the sponsoring company in all cases.
ICE blackshirts make enforcement more uncertain, but there are still laws.
The expected value for immigrants is rapidly shifting toward it being more favorable to be here illegally, because ICE/CBP is mostly going after the low-hanging fruit: easy-to-catch people they know about, with homes and salaried jobs or university enrollment and a visa. People who are off paper and 100 miles past the border are as good as gone. So basically we get the exact opposite of what we want.
Your own premise destroys your argument. If they're grabbing 'whoever', it's at least as easy to grab immigrants with a paper record as those without one.
It’s really the hardware block size that matters in this case (direct I/O). That value is a property of the hardware and can’t be changed.
In some situations, the “logical” block size can differ. For example, buffered writes use the page cache, which operates in PAGE_SIZE blocks (usually 4K). Or your RAID stripe size might be misconfigured, stuff like that. Otherwise they should be equal for best outcomes.
In general, we want it to be as small as possible!
> It’s really the hardware block size that matters in this case (direct I/O). That value is a property of the hardware and can’t be changed.
NVMe drives have at least four "hardware block sizes". There's the LBA size that determines what size IO transfers the OS must exchange with the drive, and that can be re-configured on some drives, usually 512B and 4kB are the options. There's the underlying page size of the NAND flash, which is more or less the granularity of individual read and write operations, and is usually something like 16kB or more. There's the underlying erase block size of the NAND flash that comes into play when overwriting data or doing wear leveling, and is usually several MB. There's the granularity of the SSD controller's Flash Translation Layer, which determines the smallest size write the SSD can handle without doing a read-modify-write cycle, usually 4kB regardless of the LBA format selected, but on some special-purpose drives can be 32kB or more.
And then there's an assortment of hints the drive can provide to the OS about preferred granularity and alignment for best performance, or requirements for atomic operations. These values will generally be a consequence of the above values, and possibly also influenced by the stripe and parity choices the SSD vendor made.
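For what it's worth, only the first of those sizes is something you can query directly. A rough sketch of asking the Linux kernel what it reports (the device path is just a placeholder, and you need permission to open it); the NAND page, erase block, and FTL granularities aren't exposed this way, at best you get the optional hints via sysfs (queue/minimum_io_size, queue/optimal_io_size):

    /* Sketch: ask Linux for the logical and physical block sizes of a device. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>   /* BLKSSZGET, BLKPBSZGET */

    int main(void)
    {
        int fd = open("/dev/nvme0n1", O_RDONLY);   /* device path is a placeholder */
        if (fd < 0) { perror("open"); return 1; }

        int logical = 0, physical = 0;
        if (ioctl(fd, BLKSSZGET, &logical) < 0)   { perror("BLKSSZGET");  return 1; }
        if (ioctl(fd, BLKPBSZGET, &physical) < 0) { perror("BLKPBSZGET"); return 1; }

        printf("logical (LBA) size: %d, physical size: %d\n", logical, physical);
        close(fd);
        return 0;
    }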
Why would you want the block size to be as small as possible? You will only benefit from that for very small files, hence the sweet spot is somewhere between "as small as possible" and "small multiple of the hardware block size".
If you have bigger files, then having bigger blocks means less fixed overhead from syscalls and NVMe/SATA requests.
If your native device block size is 4KiB, and you fetch 512 byte blocks, you need storage side RAM to hold smaller blocks and you have to address each block independently. Meanwhile if you are bigger than the device block size you end up with fewer requests and syscalls. If it turns out that the requested block size is too large for the device, then the OS can split your large request into smaller device appropriate requests to the storage device, since the OS knows the hardware characteristics.
The most difficult to optimize case is the one where you issue many parallel requests to the storage device using asynchronous file IO for latency hiding. In that case, knowing the device's exact block size is important, because you are IOPs bottlenecked and a block size that is closer to what the device supports natively will mean fewer IOPs per request.
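To make the alignment and batching points concrete, here's a hedged sketch of a direct-I/O read (Linux; the file name and the 4 KiB block size are assumptions, in practice you'd use whatever size the device actually reports): the buffer, offset, and length all have to be multiples of the logical block size, and issuing a largish multiple per request is what keeps the syscall/IOP overhead down.

    /* Sketch: direct I/O with an aligned buffer. O_DIRECT requires the buffer,
     * offset, and length to be multiples of the logical block size. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        const size_t block = 4096;          /* assumed logical block size */
        const size_t len   = 64 * block;    /* batch many blocks per request */

        int fd = open("datafile", O_RDONLY | O_DIRECT);   /* file name is a placeholder */
        if (fd < 0) { perror("open"); return 1; }

        void *buf = NULL;
        if (posix_memalign(&buf, block, len) != 0) {      /* buffer aligned to block size */
            fprintf(stderr, "posix_memalign failed\n");
            return 1;
        }

        ssize_t n = pread(fd, buf, len, 0);               /* offset 0 is block-aligned */
        if (n < 0) { perror("pread"); return 1; }
        printf("read %zd bytes\n", n);

        free(buf);
        close(fd);
        return 0;
    }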
Just to add my two cents—I’ve been writing Go professionally for about 10 years, and neither I nor any of my colleagues have had real issues with how Go handles errors.
Newcomers often push back on this aspect of the language (among other things), but in my experience, that usually fades as they get more familiar with Go’s philosophy and design choices.
As for the Go team’s decision process, I think it’s a good thing that the lack of consensus over a long period and many attempts can prompt them to formally define a position.
I suspect a lot of us don’t have strong feelings either way and don’t find the verbosity “hostile”. No need for Stockholm syndrome if you don’t feel like a prisoner.
Of course you may have been joking, in which case “haha”. xD
If you say so. For me it's always been the opposite - I'm excited at the start about all the cool features, then slowly get disillusioned because of the warts.
I appreciate the argument that things can often be difficult for noobs but actually fine or better than alternatives once you get used to them.
But on the other hand, people who are "used to the way things are" are often the worst people to evaluate whether changes are beneficial. It seems like the new people are the ones that should be listened to most carefully.
I'm not saying the Go team was wrong in this decision, just that your heuristic isn't necessarily a good one.
This logic mostly only makes sense if your goal is primarily to grow the audience and widen the appeal, though. I think at this stage in the Go programming language's lifespan, that is no longer the goal. If anything, Go has probably started to saturate its own sweet spot in some ways and a lot of complaints reveal a difference in values more than they do opportunity for improvement.
To me, it makes sense for the Go team to focus on improving Go for the vast majority of its users over the opinions of people who don't like it that much in the first place. There are millions of lines of code written in Go and those are going to have to be maintained for many years. Of utmost priority in my mind is making Go code more correct (i.e., by adding tools that can make code more correct-by-construction or eliminate classes of errors. I didn't say concurrency safety, but... some form of correctness checking for code involving mutexes would be really nice, something like gVisor checklocks but better.)
And on that note, if I could pick something to prioritize to add to Go, it would probably be sum types with pattern matching. I don't think it is extremely likely that we will see those, since it's a massive language change that isn't exactly easy to reconcile with what's already here. (e.g. a `Result` type would naturally emerge from the existence of sum types. Maybe that's an improvement, but boy that is a non-trivial change.)
It’s fun, because when a newcomer joins a team, people tend to remind them that their vision is fresh and they might be seeing pain we got accustomed to. That’s usually said in a positive manner.
I have a similar level of experience with Go, and I would go so far as to say it is in fact one of the best features of the language.
I wouldn’t be surprised if, when the pro-exception-handling crowd eventually wins, it leads to hard forks and severe fragmentation of the entire ecosystem.
To be honest, I really don't believe that will happen. All of the proposals pretty much just add syntactic sugar, and even those have failed to gain consensus.
That's just survivorship bias isn't it? The newcomers who find Go's design and/or leadership obnoxious get a job that doesn't involve doing something that they dislike.
That's okay. Not everyone needs to like Go. Pleasing every programmer on the planet is an unreasonable thing to ask for. It's also impossible because some preferences conflict.
After over a decade of people bringing up the issue in almost every single thread about Go, it's time to give the language what it deserves: no more constructive feedback, snarky dismissals only.
Not infrequently by people who are not even Go programmers. And/or the same people who hijack every other Go thread to rant about how much they hate Go.
You don't see me going around $languages_I_dislike threads slagging off the language, much less demanding features. Not saying anything is an option, you know.
Eminently pragmatic solution — I like it. In Rust, a crate is a compilation unit, and the compiler has limited parallelism opportunities, especially since rustc offloads much of the work to LLVM, which is largely single-threaded.
It’s not surprising they didn’t see a linear speedup from splitting into so many crates. The compiler now produces a large number of intermediate object files that must be read back and linked into the final binary. On top of that, rustc caches a significant amount of semantic information — lifetimes, trait resolutions, type inference — much of which now has to be recomputed for each crate, including dependencies. That introduces a lot of redundant work.
I also would expect this to hurt runtime performance as it likely reduces inlining opportunities (unless LTO is really good now?)
They mention that compiling one crate at a time (-j1) doesn't give the 7x slowdown, which rules out the object file/caching-in-rustc theories... I think the only explanation is that the rustc processes are sharing limited L3 cache.
It would be great to know a bit more about the protocol itself in the readme. I’m left wondering if it’s reliable connection-oriented, stream or message based, etc.
I am not sure I buy the underlying idea behind this piece, that somehow a lot of money/time has been invested into asynchronous IO at the expense of thread performance (creation time, context switch time, scheduler efficiency, etc.).
First, significant work has been done in the kernel in that area simply because any gains there massively impact application performance and energy efficiency, two things the big kernel sponsors deeply care about.
Second, asynchronous IO in the kernel has actually been underinvested for years. Async disk IO did not exist at all for years until AIO came to be. And even that was a half-baked, awful API no one wanted to use except for some database people who needed it badly enough to be willing to put up with it. It's a somewhat recent development that really fast, genuinely async IO has taken center stage through io_uring and the likes of AF_XDP.
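For readers who haven't touched it, this is roughly what that model looks like via liburing (a minimal sketch, error handling trimmed; the file name and buffer size are placeholders): you queue the operation, go do other work, and reap the completion whenever you like.

    /* Sketch: one async read through io_uring (liburing). */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <liburing.h>

    int main(void)
    {
        struct io_uring ring;
        if (io_uring_queue_init(8, &ring, 0) < 0) return 1;

        int fd = open("datafile", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        char buf[4096];
        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, fd, buf, sizeof buf, 0);   /* queue a read at offset 0 */
        io_uring_submit(&ring);                            /* hand it to the kernel, no blocking */

        /* ... do other useful work here ... */

        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(&ring, &cqe);                    /* reap the completion when ready */
        printf("read completed, res = %d\n", cqe->res);
        io_uring_cqe_seen(&ring, cqe);

        io_uring_queue_exit(&ring);
        close(fd);
        return 0;
    }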
Making OS threads run more efficiently is like faking async IO (disk/network/whatever goes beyond the computer's shell) as sync operations in a more efficient way. But why would you do that at all if the program can handle async operations in the first place? Just letting userland programs do their own business would be the better decision.
> Under the asynchronous model, both timeouts and cancellation simply compose. You take a future representing the work you're doing, and spawn a new future that completes after sleeping for some duration, or spawn a new future that waits on a cancel channel. Then you just race these futures. Take whichever completes first and cancel the other.
That only works when what you're trying to do has no side effect. Consider what happens when you need to cancel a write to a file or a stream. Did you write everything? Something? Nothing? What's the state of the file/stream at this point?
Unfortunately, this is intractable: you'll need the underlying system to let you know, which means you will have to wait for it to return. Therefore, if these operations should have a deadline, you'll need to be able to communicate that to the kernel.
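There are existing mechanisms for exactly that, at least in narrow cases. For sockets you can hand the kernel a per-operation deadline, and the return value tells you how much of the write actually happened; io_uring's linked timeouts generalize the same idea. A rough sketch (the connected socket and the 2-second deadline are assumptions):

    /* Sketch: a write with a kernel-enforced deadline on a connected socket. */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/time.h>

    ssize_t write_with_deadline(int sock_fd, const void *buf, size_t len)
    {
        struct timeval tv = { .tv_sec = 2, .tv_usec = 0 };   /* arbitrary 2-second deadline */
        if (setsockopt(sock_fd, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof tv) < 0)
            return -1;

        /* If the deadline hits, send() returns how many bytes actually went out
         * (or -1 with EAGAIN if nothing did), so the caller can see the state of
         * the stream instead of being left guessing. */
        return send(sock_fd, buf, len, 0);
    }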
I think the author knows very well what UB is and means. But he’s thinking critically about the whole system.
UB is meant to add value. It's possible to write a language without it, so why do we have any UB at all? We do because of portability and because it gives flexibility to compiler writers.
The post is all about whether this flexibility is worth it when compared with the difficulty of writing programs without UB.
The author makes the case that (1) there seems to be more money lost on bugs than money saved by faster code and (2) there's an unwillingness to do something about it because compiler writers have a lot of weight when it comes to what goes into language standards.
Even stipulating that part of the argument, the author then goes on a tear about optimizations breaking constant-time evaluation, which doesn’t have anything to do with UB.
The real argument seems to be that C compilers had it right when they really did embody C as portable assembly, and everything that’s made that mapping less predictable has been a regression.
Which I think is somewhat the core of the problem: people treating C as things it just is not. Whether that is "C is portable assembly" or the "it's just bits in memory" view of things (which is often doubly wrong, ignoring stuff like hardware caching). Or stuff like writing constant-time code based on the assumption that the compiler probably, hopefully, can't figure out that it can optimize something.
> The real argument seems to be that C compilers had it right when they really did embody C as portable assembly
But why would you use such a C? Such a C would be slow compared to its competition while still being prone to problematic bugs. At the same time, people often seem to forget that part of UB is rooted in different hardware doing different things, including behavior in some cases which isn't just a register/mem address having an "arbitrary value" but is more similar to C UB (e.g. when it involves CPU caches).
> Although it strove to give programmers the opportunity to write truly portable programs, the C89 Committee did not want to force programmers into writing portably, to preclude the use of C as a “high-level assembler:” the ability to write machine-specific code is one of the strengths of C. It is this principle which largely motivates drawing the distinction between strictly conforming program and conforming program (§4).
This doesn't say that C is a high-level assembler.
It just says that the committee didn't (at that point in time) want to force the use of "portable" C in a way that would preclude the use of C as a high-level assembler. But just because some people use something as a high-level assembler doesn't mean it is high-level assembly (I once used a spoon as a fork; it's still a spoon).
Furthermore, the fact that they explicitly describe forcing portable C with the term "to preclude" and not "to break compatibility" or similar says a lot, I think, about whether or not the committee thought of C as high-level assembly.
Most importantly, the quote is about the process of making the first C standard, which had to ease the transition from various non-standardized C dialects to "standard C". I'm pretty sure that throughout that history there have been C dialects and compiler implementations which approached C as high-level assembly, but C as in "standard C" is not that.
That statement means the committee did not want to stop it from being developed. The question is, has it? They mean a specific implementation could work as a portable assembler, mirroring djb's request for an 'unsurprising' C compiler. Another interpretation would be in the context of CompCert, which has been developed to achieve semantic preservation between assembly and its source. Interestingly, this of course hints at verifying an assembled snippet coming from some other source as well. Then that alternate source for the critical functions frees the rest of the compiler internals from the problems of preserving constant-timeness and leak-freedom through their passes.
C already existed prior to the ANSI standardization process, so there was nothing "to be developed", though a few changes were made to the language, in particular function prototypes.
C was being used in this fashion, and the ANSI standards committee made it clear that it wanted the standard to maintain that use-case.
These are aspirational statements, not a factual judgment of what that standard or its existing implementations actually are. At the very least, they do not cover all implementations, nor do they define precisely what they cover. Note the immediately following statement: "C code can be non-portable."
In my opinion, C has tried to serve two masters and they made a screw-hammer in the process.
The rest of the field has moved on significantly. We want portable behavior, not implementation-defined vomit that will leave you doubting whether porting introduces new UB paths that you haven't already fully checked against (by, e.g. varying the size of integers in such a way some promotion is changed to something leading to signed overflow; or bounds checking is ineffective).
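To make that promotion hazard concrete, here's the classic sketch (a hypothetical function, not from any real codebase): code that looks entirely unsigned but picks up signed-overflow UB purely because of how the integer promotion rules interact with the platform's int width.

    /* Hypothetical sketch: everything here is unsigned, but on a platform with
     * 32-bit int both uint16_t operands are promoted to (signed) int, and
     * 0xFFFF * 0xFFFF = 0xFFFE0001 > INT_MAX, so the multiply is signed
     * overflow -- UB -- rather than the modular arithmetic the author likely
     * intended. */
    #include <stdint.h>

    uint32_t scale(uint16_t a, uint16_t b)
    {
        return (uint32_t)(a * b);    /* UB for large a, b after promotion to int */
    }

    /* Usual fix: force the arithmetic into a wide unsigned type first,
     * e.g. (uint32_t)a * b. */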
The paragraph further down about explicitly and swiftly rejecting a validation test suite should also be read as a warning. Not only would proposing modern software development without a test suite get you swiftly fired today, but they're explicitly acknowledging the insurmountable difficulties in producing any code with consistent cross-implementation behavior. But in the time since then, other languages have demonstrated you can reap many of the advantages of close-to-the-metal without compromising on consistent cross-target behavior, at least for many relevant real-world cases.
They really knew what they were building, a compromise. But that gets cherry-picked into absurdity such as stating C is portable in present-tense or that any inherent properties make it assembly-like. It's neither.
These are statements of intent. And the intent is both stated explicitly and also very clear in the standard document that the use as a "portable assembler" is one of the use cases that is intended and that the language should not prohibit.
That does not mean that C is a portable assembly language to the exclusion of everything and anything else, but it also means the claim that it is definitely in no way a portable assembly language at all is also clearly false. Being a portable assembly (and "high level" for the time) is one of the intended use-cases.
> In my opinion, C has tried to serve two masters and they made a screw-hammer in the process.
Yes. The original intent for which it was designed and in which role it works well.
> The rest of the field has moved on significantly. We want portable behavior, not implementation-defined vomit that will leave you doubting whether porting introduces new UB paths that you haven't already fully checked against
Yes, that's the "other" direction that deviates from the original intent. In this role, it does not work well, because, as you rightly point out, all that UB/IB becomes a bug, not a feature.
For that role: pick another language. Because trying to retrofit C to not be the language it is just doesn't work. People have tried. And failed.
Of course what we have now is the worst of both worlds: instead of either (a) UB serving its original purpose of letting C be a fairly thin and mostly portable shell above the machine, or (b) eliminating UB in order to have stable semantics, compiler writers have chosen (c): exploiting UB for optimization.
Now these optimizations alter program behavior, sometimes drastically and even impacting safety (for example by eliminating bounds checks that the programmer explicitly put in!), despite the fact that the one cardinal rule of program optimization is that it must not alter program behavior (except for execution speed).
The completely schizophrenic "reasoning" for this altering of program behavior being somehow OK is that, at the same time that we are using UB to optimize all over the place, we are also free to assume that UB cannot and never does happen. This despite the fact that it is demonstrably untrue. After all UB is all over the C standard, and all over real world code. And used for optimization purposes, while not existing.
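The canonical sketch of a programmer-written guard being deleted (a hypothetical example of a well-known pattern): the check below can only ever be true if signed overflow occurs, and since signed overflow is UB, the compiler may treat the condition as unconditionally false and drop the branch.

    /* Since x + 1 "cannot" overflow in a valid program (signed overflow is UB),
     * the compiler may fold the condition to false and delete the very branch
     * the programmer wrote to catch overflow. */
    int next_or_fail(int x)
    {
        if (x + 1 < x)      /* intended overflow check; may be optimized away */
            return -1;
        return x + 1;
    }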
> They really knew what they were building, a compromise.
Exactly. And for the last 3 decades or so people have been trying unsuccessfully to unpick that compromise. And the result is awful.
The interests driving this are also pretty clear. On the one hand a few mega-corps for whom the tradeoff of making code inscrutable and unmanageable for The Rest of Us™ is completely worth it as long as it shaves off 0.02% running time in the code they run on tens or hundreds of data centers and I don't know how many machines. On the other hand, compiler researchers and/or open-source compiler engineers who are mostly financed by those few megacorps (the joy of open-source!) and for whom there is little else in terms of PhD-worthy or paid work to do outside of that constellation.
I used to pay for my C compiler, thus there was a vendor and I was their customer and they had a strong interest in not pissing me off, because they depended on me and my ilk for their livelihood. This even pre-dated the first ANSI-C standard, so all the compiler's behavior was UB. They still didn't pull any of the shenanigans that current C compilers do.
Back in 1989, when the C abstract machine's semantics were closer to those of a portable macro processor, and stuff like the register keyword was actually something compilers cared about.
And even then there was no notion of constant-time being observable behavior to the compiler. You cannot write reliably constant-time code in C because execution time is not a property the C language includes in its model of computation.
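A sketch of what that means in practice: this is how "constant-time" comparisons are typically written in C, but nothing in the language ties the timing to the code's shape. Once the result is determined, the compiler may legally short-circuit the loop, so whether it stays constant-time is an accident of the optimizer, not a guarantee.

    /* A typical "constant-time" comparison. Timing is not observable behavior
     * in the abstract machine, so a compiler that notices the result is already
     * determined (diff can only gain bits) may legally add an early exit,
     * vectorize, or otherwise change how long this takes. */
    #include <stddef.h>
    #include <stdint.h>

    int ct_equal(const uint8_t *a, const uint8_t *b, size_t n)
    {
        uint8_t diff = 0;
        for (size_t i = 0; i < n; i++)
            diff |= a[i] ^ b[i];     /* intent: always touch every byte */
        return diff == 0;
    }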
But having a straightforward/predictable mapping to the underlying machine and its semantics is included in the C model of computation.
And that is actually not just compatible with the C "model of computation" being otherwise quite incomplete; the two properties are really just two sides of the same coin.
The whole idea of an "abstract C machine" that unambiguously and completely specifies behavior is a fiction.
> But having a straightforward/predictable mapping to the underlying machine and its semantics is included in the C model of computation.
While you can often guess what the assembly will be from looking at C code given that you're familiar with the compiler, exactly how C is to be translated into assembly isn't well-specified.
For example, you can't expect that all uses of the multiplication operator "*" results in an actual x86 mul instruction. Many users expect constant propagation, so you can write something like "2 * SOME_CONSTANT" without computing that value at runtime; there is no guarantee of this behavior, though. Also, for unsigned integers, when optimizations are turned on, many expect compilers to emit left shift instructions when multiplying by a constant power of two, but again, there's no guarantee of this. That's not to say this behavior couldn't be part of a specification, but it's just an informal expectation right now.
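A quick sketch of those two informal expectations (the names and constants are made up); the standard only guarantees the computed values, not the instructions the compiler uses to get them:

    /* Neither transformation is guaranteed by the standard, but most
     * optimizing compilers do both. */
    #define SOME_CONSTANT 21

    unsigned folded(void)
    {
        return 2 * SOME_CONSTANT;    /* expectation: folded to 42 at compile time */
    }

    unsigned strength_reduced(unsigned x)
    {
        return x * 8;                /* expectation: emitted as x << 3, not a mul */
    }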
What I think people might want is some readable, well-defined set of attribute grammars[0] for translation of C into assembly for varying optimization levels - then, you really would be able to know exactly how some piece of C code under some context would be translated into assembly. They've already been used for writing code generator generators in compilers, but what I'm thinking is something more abstract, not as concrete as a code generation tool.
> exactly how C is to be translated into assembly isn't well-specified.
Exactly! It's not well-specified so the implementation is not prevented from doing a straightforward mapping to the machine by some part of the spec that doesn't map well to the actual machine.
> But having a straightforward/predictable mapping to the underlying machine and its semantics is included in the C model of computation.
Not really, or at least not in a way which would count as "high-level assembler". If it were, the majority of optimizations compilers do today would not be standard-conforming.
Like there is a mapping to behavior but not a mapping to assembly.
Which is where the abstract C machine comes in, as a hypothetical machine formed from the rules of the standard. Kind of a mental model which runs the behavior mappings instead of running any specific assembly. But then, it not being unambiguous and complete doesn't change anything about C not being high-level assembly; actually, it makes C even less of a high-level assembly.
So you can easily tell, just by looking at the C source code, whether plain assembly instructions from the four books of the ISA manual are being used, whether the compiler is able to automatically vectorize a code region (including which flavour of vector instructions), or whether it will completely replace specific math code patterns with a single opcode.
Nobody says that implementation-defined behavior must be sane or safe. The crux of the issue is that a compiler can assume that UB never happens, while IB is allowed to. Does anyone have an example where the assumption that UB never happens actually makes the program faster and better, compared to UB==IB?
The issue is that you’d have to come up with and agree on an alternative language specification without (or with less) UB. Having the compiler implementation be the specification is not a solution. And such a newly agreed specification would invariably either turn some previously conforming programs nonconforming, or reduce performance in relevant scenarios, or both.
That’s not to say that it wouldn’t be worth it, but given the multitude of compiler implementations and vendors, and the huge amount of existing code, it’s a difficult proposition.
What traditionally has been done, is either to define some “safe” subset of C verified by linters, or since you probably want to break some compatibility anyway, design a separate new language.
> UB is meant to add value. It’s possible to write a language without it, so why do we have any UB at all? We do because of portability and because it gives flexibility to compilers writers.
Implementation-defined behavior is here for portability for valid code. Undefined behavior is here so that compilers have leeway with handling invalid conditions (like null pointer dereference, out-of-bounds access, integer overflows, division by zero ...).
What does it mean for a language not to have UB? There are several ways to handle invalid conditions:
1) eliminate them at compile time - this is optimal, but currently practical just for some classes of errors.
2) have consistent, well-defined behavior for them - but platforms may handle invalid conditions in vastly different ways
3) have consistent, implementation-defined behavior for them - usable for some classes of errors (integer overflow, division by zero), but for others it would add extensive runtime overhead.
We are on our way there.