Is all UB silly? E.g., wouldn't fully defining what happens when one goes beyond the end of an array impose a non-trivial performance hit for at least some code?
Yes. But there's middle ground between fully-defined behavior (lots of slow checks) and what current compiler-writers think UB is (do whatever I want).
Specifically, implement UB the way it is described in the standard: pretend it isn't UB, do it anyway, consequences be damned. That's what "ignore the situation with unpredictable results" actually means.
The current standard is _very_ explicit that undefined behavior is indeed undefined, i.e. "do whatever you want".
> pretend it isn't UB, do it anyway, consequences be damned.
This explicitly isn't a requirement, but even if it were, "ignoring the situation completely with unpredictable results" can be interpreted in numerous ways. One of these ways is "ignoring any cases in which UB is encountered" which is exactly what compilers are doing. Then again, saying "the compiler didn't ignore the situation and as a consequence I got results I didn't predict" isn't a strong argument when the standard specifically told you that you will get unpredictable results.
Which part of the standard is ignored? Again, the standard is _very_ explicit about what undefined behavior means. If you don't like that, you can either try to change the standard or use the numerous command-line options provided by most compilers to tell your compiler that you would like certain undefined behaviors to have a defined meaning.
Saying that compilers shouldn't ignore code with undefined behavior is like saying compilers shouldn't ignore the body of an if-statement just because the condition evaluated to false.
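A concrete sketch of the kind of "ignoring" being discussed (hypothetical code, and the exact behavior depends on compiler and flags, but mainstream compilers at -O2 commonly fold this check away because signed overflow is UB):

// Hypothetical example: the programmer intends an overflow check,
// but signed overflow is UB, so the optimizer may assume
// x + 100 never wraps and fold the comparison to false.
#include <cstdio>
#include <climits>

bool will_overflow(int x) {
    return x + 100 < x;   // UB when x + 100 overflows
}

int main(int argc, char **) {
    int big = INT_MAX - argc + 1;   // INT_MAX when run with no arguments
    // Typically prints 1 at -O0 (the addition wraps at runtime)
    // and 0 at -O2 (the check is folded away).
    std::printf("%d\n", will_overflow(big));
}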
You're right on one point: the standard is very explicit.
And because it is explicit, as you yourself just admitted, the fact that silently erasing non-dead code is not among the listed responses to UB means that it is not allowed.
Reasonable people can disagree as to whether that interpretation is valid.
No reasonable person can say that it is explicit. It simply, factually, is not. At no point in any version of the C Standard does the text "implementations can do whatever they want" appear.
I have no time for blatant and insulting dishonesty. We're done here.
> "behavior, upon use of a nonportable or erroneous program construct or of erroneous data, for which this International Standard imposes no requirements"
It is _very_ explicit. The note that follows that definition is (as all notes are) not normative. So even if the note did cast any doubt (it really doesn't), it could safely be ignored.
Seems that, ideally, there would be a standard based on viral inactivation/removal rate, rather than one specifically targeting ACH (air changes per hour). Companies and builders could then decide what combination of filtration, ventilation, and UV deactivation they will use to achieve that rate.
Also, I couldn't find a video, but at least in my Asian family, we will sometimes cut food like, say, an eggroll by using the chopsticks as scissors.
Lastly, a chopsticks hack is that they are a great way to eat food like buttered popcorn without getting your fingers greasy. :-)
If you use university resources, they almost certainly will have a claim to the IP. Best to just buy a personal laptop, and use only personal resources.
Universities are governed by masses of by-laws. Even if you read, understand and abide by all of them, you cannot confidently protect yourself against various accusations, e.g. plagiarism, academic misconduct, etc. You could even jeopardize your PhD.
The only reasonably "safe" approach is to buy your own PC, only work on your idea off campus, and not publish, publicize or otherwise promote your work until after you graduate with your PhD. Keep a diary of your actual work, etc., to be able to demonstrate that you did not utilize any of the university's resources for your project and that the essential IP is unrelated to your PhD research work.
I earned my PhD a few years ago and along the way I saw some of my colleagues get tripped up by the most specious academic rules. Cutting corners is not worth it.
I'm a professor at an R1 university in the CS dept. I wire-wrapped a PDP-8 in school as part of a CS degree, and thought it was super-interesting and fun, and part of me agrees with you.
But the reality is that you can only cram so much into a 4-year degree and wire-wrapping a 68000 seems like it would take many hours. I already feel like there is so much that we are leaving out. For example, our undergrads don't implement a compiler as part of their degree.
*EDIT: Also, it's arguably more computer engineering than computer science, but my main point is that the undergrad CS curriculum is already super-crowded.
I'm surprised at the mention of no compiler. We worked with a microcode simulator for low-level CPU poking examples, partially implemented a pre-written compiler as part of our Computer Science course, and likewise wrote a DNS server (these are just the parts I remember finding challenging as an undergrad).
We also learnt more useless things with databases, like aligning writes with HDD sectors for performance (which I, and others, rolled our eyes at at the time, since we knew we'd never need it; not because of SSDs, but how many people write a database?).
This was a 3-year course at a "red brick" university in the UK, ~1997-2000, but by no means one of the best for technology - e.g. the head of the department, and by extension those under him, refused to teach design patterns (or enterprise patterns).
When I asked two professors why, and laid out (what I thought was) a factual grounding, I was told that patterns are for Java or C++ and are language-specific (which, as you probably know, is BS; only implementations are, or patterns that work around a language deficit). I later learned they took this as personal criticism rather than criticism of the course.
Offtopic:
I'm still salty 25 years later about being given bad grades for things such as that (i.e. first and 2:1 grades for some coursework; thirds, passes and fails in others, usually the ones I happened to argue in, even though marking was meant to be anonymous).
I now have an illustrious career in IT, open source and competitive coding (having won my fair share).
Tl;dr: I learnt not to argue with the people who grade you in a polarised institution until you've got the qualification. I wish I could have told myself that at 19.
We have had a compiler elective before, but it was never required, and currently we don't offer it. I don't think there is a lot of demand from students for it, for better or worse. We also don't do networking fundamentals as a requirement, like sliding window protocol, CSMA/CD, how routing works, etc., but we do have an elective that I believe covers these things.
We do require architecture, and still even do Karnaugh maps. I do believe that every CS person should have a fundamental understanding of cache, instruction fetch, decode, MESI, etc., but they probably don't need a semester's worth of architecture. If I had my druthers, I would consolidate a number of separate courses into maybe a 2-semester sequence that would basically be "What every computer scientist should know", covering the coolest and most seminal topics from different areas of CS.
That's interesting and somewhat surprising. I'm not knowledgeable about battery design by any means, but I would have thought that there would be a better way to make a battery pack for a car than connecting thousands of small batteries together.
If I remember my basic chemistry, batteries don't often deliver voltages at the 10/20/100 V level directly; a single cell is more commonly in the 1-2 V or 0.5 V class. You have to have a much more 'aggressive' chemical reaction to deliver higher voltages. And the same goes for current: a single surface between two reacting things delivers less current; it's a function of surface area. Same with capacitance: you sometimes need 'more' surface to big up the effect.
Therefore all you can do is stack it up, parallel or serial; that's what there is to get higher voltages, more current draw, and longer life per cell.
Inside a lead-acid battery it's multiple surfaces, sub-cells. It's normal. Inside almost any domestic battery I suspect it's sub-cells, sub-cells all the way down.
A giant roll of surface, to increase the area in contact, might be one way of getting "more" in terms of current draw or lifetime. I bet its voltage remains close to constant in this, hence Tesla "stacking" up the rolled cells to boost voltage.
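As a rough back-of-the-envelope sketch of that stacking arithmetic (all the cell and pack numbers below are assumptions for illustration, not figures from this thread):

// Back-of-the-envelope series/parallel stacking arithmetic.
// All numbers are assumed for illustration only.
#include <cmath>
#include <cstdio>

int main() {
    const double cell_voltage  = 3.6;    // nominal volts per li-ion cell (assumed)
    const double cell_capacity = 3.2;    // amp-hours per cell (assumed)
    const double pack_voltage  = 350.0;  // target pack voltage (assumed)
    const double pack_capacity = 230.0;  // target pack amp-hours (assumed)

    // Cells in series set the pack voltage; parallel groups set the capacity.
    const int series   = static_cast<int>(std::ceil(pack_voltage / cell_voltage));
    const int parallel = static_cast<int>(std::ceil(pack_capacity / cell_capacity));

    std::printf("%d in series x %d in parallel = %d cells\n",
                series, parallel, series * parallel);   // lands in the thousands
}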
The Nissan Leaf uses larger cells [1], each roughly the size of a ream of printer paper. So there are real car designers who agree larger batteries are worth considering.
Of course, the Leaf makes a bunch of other decisions that are different to Tesla - lower price point, smaller battery/reduced range, air-cooling batteries instead of water-cooling, a (now abandoned) battery lease scheme, and suchlike.
Using standard form factors and manufacturing techniques made it much easier for Tesla to get batteries off the ground through their partnership with Panasonic. The extra space left by the gaps between cells also has the advantage of being ideal for cooling (battery performance and safety are correlated with temperature).
This strategy was one of the things remarked upon when I first heard of Tesla (something like “this California startup is powering their electric car with laptop batteries”). Ironically, laptops have almost all transitioned to lithium polymer (pouch cells) instead of the 18650s they used back then. Not all car manufacturers use Tesla's standardized-cell technique, as it does have some downsides. I guess time will tell, but I doubt Tesla will abandon this technique any time soon.
Separating the cells makes it easier to cool them. It also provides more inert metal between them in case of fire.
A certain amount of stacking is necessary to get up to a decent voltage, as others have pointed out. But even "100 brick-sized cells" would be a more dangerous prospect than "thousands of 18650 cells".
Kind of depends on what we mean when we use the word "Internet". If we mean internet as a social/economic revolution, then sure, BBSs were relevant. If we mean Internet as a set of protocols, software that implements them, and hardware to run them on, then I don't see that BBSs had a lot of relevance (though they used sliding window protocols for file transfer such as ZMODEM/YMODEM).
There was a brief period where BBSs also offered various gateway services into the Internet along with the normal BBS services like doors. I received my first ever email address from a huge local BBS back in the early '90s by paying a token fee every month. I had nobody I knew with email addresses until a few years later, so it was kind of pointless, but it did work.
On a larger scale, AOL offering Internet services through their network was sort of the same thing, until they eventually just became an ISP.
Big networks, such as UUCP and BITNET, were gated in and out of the Internet. (UUCP for Unix systems, BITNET for IBM mainframes.) This had interesting effects on email addresses:
Is that email address even real? I can understand some of it but not all. Looks like UUCP to research, UUCP to ucbvax, somehow send to cmu-cs-pt.arpa, idk what's up with CMU-ITC-LINUS, email to dave%CMU-ITC-LINUS@CMU-CS-PT, which forwards to dave@CMU-ITC-LINUS
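A toy sketch of that '%' rewriting step (a hypothetical helper, not code any real mailer of the era used):

#include <string>
#include <cstdio>

// Toy illustration of the "% hack": mail addressed to user%hostA@hostB is
// delivered to hostB, which promotes the rightmost '%' back into '@' and
// forwards the message on to the next hop.
std::string next_hop(std::string addr) {
    auto at = addr.find('@');
    if (at == std::string::npos) return addr;   // nothing left to rewrite
    addr.erase(at);                             // drop "@CMU-CS-PT"
    auto pct = addr.rfind('%');
    if (pct != std::string::npos) addr[pct] = '@';
    return addr;
}

int main() {
    // Prints "dave@CMU-ITC-LINUS": where CMU-CS-PT sends the mail next.
    std::printf("%s\n", next_hop("dave%CMU-ITC-LINUS@CMU-CS-PT").c_str());
}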
That's really interesting! Just to be clear, in my post I'm referring to bog-standard email addresses, not FidoNet... which was also very common at the time.
I'm not aware of any FidoNet <-> Internet bridges during that time period, but I suppose there could have been. AFAIK Fido relays all occurred via POTS sync calls between BBSs.
Agree. You could use that argument against almost any teaching example. Like, when's the last time you ever needed to integrate x from 0 to 1? I've never needed to do that. Or when is the last time you needed to know how long a 1 kg ball will take to fall from 1 m? At least in my opinion, the way the author thinks recursion should be taught seems to needlessly complicate it.
That seems unlikely to happen in the near to medium term. For that to happen, everything would have to be rewritten using a quantum algorithm and language, and run on quantum hardware. Imagine writing a web browser in a quantum language, within a quantum computing software ecosystem. It's hard to see how that would have any benefit.
If you are talking 100 years out, though, who knows?
For languages like C, C++, and Rust, the bottleneck is mainly going to be system calls. With a big buffer, on an old machine, I get about 1.5 GiB/s with C++. Writing 1 char at a time, I get less than 1 MiB/s.
// Measures write() throughput to stdout: ./a.out <buffer_size> <num_writes>
// Example build/run (assumed): g++ -O2 bench.cpp && ./a.out 1048576 1000 > /dev/null
#include <cstddef>
#include <chrono>
#include <cassert>
#include <cstdio>
#include <unistd.h>
#include <cstring>
#include <cstdlib>

int main(int argc, char **argv) {
    assert(argc == 3);
    const unsigned int n = std::atoi(argv[1]);   // bytes per write()
    const unsigned int k = std::atoi(argv[2]);   // number of write() calls
    char *buf = new char[n];
    std::memset(buf, '1', n);

    auto start = std::chrono::high_resolution_clock::now();
    for (size_t i = 0; i < k; i++) {
        const ssize_t rv = write(1, buf, n);     // one syscall per iteration
        assert(rv == ssize_t(n));
    }
    auto stop = std::chrono::high_resolution_clock::now();

    std::chrono::duration<double> secs = stop - start;
    std::fprintf(stderr, "buffer size: %u, num syscalls: %u, perf: %f MiB/s\n",
                 n, k, (double(n) * k) / (1024 * 1024) / secs.count());
    delete[] buf;
}
EDIT: Also note that a big write to a pipe (bigger than PIPE_BUF) may require multiple syscalls on the read side.
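A reader that wants the whole buffer therefore has to loop; a minimal sketch (read_all is a hypothetical helper, not part of the program above):

#include <unistd.h>
#include <cstddef>
#include <cstdio>

// read() from a pipe can return fewer bytes than requested,
// so loop until the whole amount has arrived (or EOF/error).
ssize_t read_all(int fd, char *buf, size_t want) {
    size_t got = 0;
    while (got < want) {
        ssize_t rv = read(fd, buf + got, want - got);
        if (rv <= 0) break;            // EOF or error: stop early
        got += size_t(rv);
    }
    return ssize_t(got);
}

int main() {
    char buf[1 << 16];
    ssize_t n = read_all(0, buf, sizeof(buf));   // drain up to 64 KiB from stdin
    std::fprintf(stderr, "got %zd bytes\n", n);
}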
EDIT 2: Also, it appears that the kernel is smart enough to not copy anything when it's clear that there is no need. When I don't go through cat, I get rates that are well above memory bandwidth, implying that it's not doing any actual work:
I suspect (but am not sure) that the shell may be doing something clever for a stream redirection (>) and giving your program a STDOUT file descriptor directly to /dev/null.
I may be wrong, though. Check with lsof or similar.
There's no special "no work" detection needed. a.out is calling the write function for the null device, which just returns without doing anything. No pipes are involved.