Not really. By default, allocators will panic if there isn't physical memory available. Recursive functions can panic at a certain depth. Code generated by macros isn't very visible to developers, and recursive macros are very common. Return values are checked only if the developer adds #[must_use].
You can overcome a lot of this if you invest heavily in the type system, but that depends on the developer.
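To make the last two points concrete, here is a minimal sketch (the function names are invented for illustration; `#[must_use]` and `Vec::try_reserve` are real std features):

    use std::collections::TryReserveError;

    // Without #[must_use], a caller could compute this and silently
    // discard the result with no warning at all.
    #[must_use]
    fn checksum(data: &[u8]) -> u32 {
        data.iter().fold(0u32, |acc, &b| acc.wrapping_add(b as u32))
    }

    // The default allocation path aborts the process on failure;
    // Vec::try_reserve is the explicit, fallible opt-in alternative.
    fn grow(buf: &mut Vec<u8>, extra: usize) -> Result<(), TryReserveError> {
        buf.try_reserve(extra)?;
        buf.resize(buf.len() + extra, 0);
        Ok(())
    }

    fn main() {
        let mut buf = Vec::new();
        grow(&mut buf, 1024).expect("allocation failed");
        let _ = checksum(&buf); // discarding a #[must_use] value needs `let _ =`
    }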
As I recall, Be was asking on the order of $80 million; NeXT was acquired for $400 million.
I found this reference: the valuation was around $80 million, and Be wanted upwards of $200 million. “In 1996, Apple Computer decided to abandon Copland, the project to rewrite and modernize the Macintosh operating system. BeOS had many of the features Apple sought, and around Christmas time they offered to buy Be for $120 million, later raising their bid to $200 million. However, despite estimates of Be's total worth at approximately $80 million,[citation needed] Gassée held out for $275 million, and Apple balked. In a surprise move, Apple went on to purchase NeXT, the company their former co-founder Steve Jobs had earlier left Apple to found, for $429 million, with the high price justified by Apple getting Jobs and his NeXT engineers in tow. NeXTSTEP was used as the basis for their new operating system, Mac OS X.”
I don’t remember the exact number, but BeOS was too incomplete at the time to justify what they were asking, and maybe to purchase at all. There was no way to print documents, which still mattered a lot for a desktop OS in 1996. It needed a lot of work.
Now, in retrospect, Apple had time; Mac OS X wasn’t ready for the mainstream until 2003-2004.
Send PostScript, done. Today it's figuring out which driver will properly rasterize exotic things like ligatures, because we decided that throwing a real CPU in the printer was a mistake.
Unless you were using anything from that tiny obscure Hewlett Packard operation and didn’t want to shell out for a module. HP never promoted Postscript. It was far from universal as an embedded PDL.
> that throwing a real CPU in the printer was a mistake.
The CPU in any decently modern printer is still many times more powerful than what was in an original LaserWriter (30 ppm and up needs power, even if it’s simple transforms and not running wankery). It’s not just about CPU power: modern laser printers still support PDLs and vector languages like PCL and PDF (and many have some half-assed, often buggy PS “compatibility”, e.g. BRScript). The bigger mistake is using a general-purpose Turing tarpit that is “powerful” rather than a true high-level, built-for-purpose PDL. PostScript just isn’t very good and was always a hack.
> Send PostScript, done.
The other problem, of course, is that raw PostScript as a target for most applications is not at all elegant and is, ironically, too low-level. So even if you wanted PostScript, an OS that didn’t provide something more useful to most applications was missing core functionality.
The jwz quote about regexes applies just as well.
> Unless you were using anything from that tiny obscure Hewlett Packard operation and didn’t want to shell out for a module. HP never promoted Postscript. It was far from universal as an embedded PDL.
That's why HP had a series of printers marketed explicitly for Macintosh use, whose difference from the otherwise-identical model was that the PostScript interpreter module was included as standard, since the Mac didn't really support non-PostScript printers with anything resembling usability.
There was a time when the fastest 68k processor Apple shipped was in the LaserWriter (12 MHz, versus 8 MHz in the Mac).
I seem to recall a story of someone internal to Apple figuring out how to run a compiler or other batch processing system on the LaserWriter as a faster quasi-coprocessor attached to a Mac.
I remember that time. I was taking a graduate level intro to graphics class and we had an assignment to write a ray-tracer and turn in a printout of the final image along with a printout of the source code. The instructor allowed any programming language, so I used a different one for each assignment.
For the ray-tracing assignment I used PostScript: the PS image operator calls a function to return each sample in the image, and the transform matrix made scaling the image easy.
My code was two pages long, up from one page because of the many comments. I think the next shortest was 15 pages. It also ran faster than most others because of the faster processor.
Don Lancaster (outside of Apple) did that. In fact, he ignored the Mac and connected a LaserWriter directly to his Apple II, and programmed in straight PostScript. Used that language the rest of his life. All the PDFs on his site were hand-crafted.
Oh, I knew that was coming. This interesting but ancient piece of trivia just illustrates how slow micros were back then. It’s not like printers don’t have more powerful, and multiple, CPUs today. It’s not like whatever poorly written, outsourced-to-India “managed” shit and other features are going to run on a potato. Whatever is driving the color touch LCD on even the Walmart econoshit is many times more powerful than that 12 MHz 68k.
Still have no idea what the GP's point was. You can just as easily run a raster on the host; if it has bugs it has bugs, and where it lives doesn't matter.
Further rose-tinting, of course: the LaserWriter was $20k, and it’d be a decade-plus before a monochrome laser dropped under $1k. I’m gonna guess the Canon with the shitty drivers is 10x cheaper and faster.
It really isn't that much, though. A 1200x1200 DPI monochrome image of a Letter-size page (not even accounting for margins) is on the order of 16 MiB uncompressed. And bitmaps of text and line art compress down heavily (and you can use a bitmap atlas or a prerendered bitmap font technique as well).
It’s also usually easier to upgrade the RAM in a printer than to fix its crappy firmware.
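A quick back-of-the-envelope check on that 16 MiB figure (a sketch assuming Letter at 8.5x11 inches, 1 bit per pixel, no margins):

    fn main() {
        // 1200 dpi over an 8.5 x 11 inch page, monochrome at 1 bit per pixel.
        let width_px: u64 = 1200 * 17 / 2; // 10200
        let height_px: u64 = 1200 * 11;    // 13200
        let bytes = width_px * height_px / 8;
        println!("{} bytes = {:.1} MiB", bytes, bytes as f64 / (1024.0 * 1024.0));
        // prints: 16830000 bytes = 16.1 MiB
    }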
> most printers still render fonts and such internally.
Many printers have some scalable font rendering capability, but it is often not usable in practice for high fidelity. You absolutely can raster on the host, either to a bitmap font or by making use of the PDL's native compression. Most lower-end printers (which is pretty much the bulk of what is sold) do not have the capability to render arbitrary TrueType fonts, for instance. A consumer/SOHO-level Canon laser using UFRII is going to rely on the host for rastering arbitrary fonts.
I have a modern Canon laser printer that does not properly implement ligatures because of obscure driver issues. What I see on the screen is not what is printed.
Text layout is hard, and unfortunately drivers and firmware are often buggy (and as printing becomes lower and lower margin, that doesn’t get better). But just throwing a weird language engine in doesn’t actually solve any of those problems.
Text layout doesn't need to be done when the source is a PDF. Make printers do the PDF and let Adobe control trademark access via conformity tests and life is good.
The biggest errors I’ve found are when the PDF says it’s using FontX or whatever, the printer claims to support the same version, and it’s subtly different.
The PDF tool assumes it doesn’t have to send the full font, and the document comes out garbled. “Print as image” sometimes gets around this.
> Text layout doesn't need to be done when the source is a PDF.
PDF isn’t entirely a panacea, since it’s complex enough that printing any random PDF isn’t trivial at all; but sure, close enough. Before, though, you were talking about PostScript.
> Make printers do the PDF and let Adobe control trademark access via conformity tests and life is good.
PDF printers aren’t all that uncommon. So why doesn’t your Canon do this? These aren’t technical issues at all. This is an economic/financial problem, as mentioned (doing business with Adobe!). It isn’t about part cost; a CPU 100x more powerful than the one in the LaserWriter is nothing.
Apple didn't support anything other than PostScript natively at the time, so their printers came with PostScript support. HP made special models for use with Macs that shipped with PostScript included.
The high price was also justified by the success of WebObjects at the time, which was seen as a potential fuel for an IPO for NeXT, even though WebObjects was not what Apple was buying it for. This Computer History Museum article goes into that angle in detail: https://computerhistory.org/blog/next-steve-jobs-dot-com-ipo...
The primary reason to legalize isn’t to make it easier to do drugs, it’s to not use the justice and court system for dealing with addiction problems.
Our goal should be to legalize use and then take the money saved from police enforcement and funnel that into programs that get people off drugs. In the US, one issue is that the latter falls under the healthcare system, and we all know that has a lot of issues in serving people who fall into the under-employed category.
When this happens, the reason 90% of the time is not that the program wasn’t working, but that the opposition to the program has made sure to either gut the funding or put in measures that make those programs not work (only hiring 2 people to handle all the work, or excessive operating requirements).
Cops will fight tooth and nail against social programs because it reduces their budget when problems are solved.
Look up these programs and you will see centrists claiming the progressive program was bad, but never indicating why.
In Portland, decriminalization was poorly planned, new treatment options were implemented badly, and the alternative penalties for possession were not meaningfully enforced. It was a failure of execution.
I don’t think it tells us much about how well an ideally functioning decriminalization or legalization effort would work. It does tell us that this transition is difficult to accomplish successfully.
Absolutely, Americans love saying “we’ll just send the cops after them.” Because then they don’t have to do any of the hard work of understanding or funding the programs. Americans are lazy when it comes to solving actual hard problems.
In numerous places those efforts have been purposefully sabotaged by police who aren't happy about the loss of court revenues and the eventual cutbacks to police funding for drug prohibition, with them literally refusing to enforce laws against things like public intoxication or shooting up heroin in the middle of the street, because the more profitable and super-easy-to-get drug possession arrests no longer existed.
Not my area of expertise per se, but the counterargument I've seen is that the states (e.g. Oregon) that tried it never got the backstops in place to soften and support the transition (e.g. rehab centers, support programs, social programs). Instead, it was just a hard switch that went predictably badly.
There's at least a theory that people believe will work but that hasn't been correctly implemented yet; as for whether it's feasible to implement at all, I'm not holding my breath.
We really don’t know that; they had terrible data reporting on drug use before the policy was implemented, so we can’t even make a before/after comparison. We also can’t parse out the extent to which changes in drug-use stats reflect changes in autopsy practice, or changes in cultural attitudes and candor about drug use affecting self-reports.
More specifically, it was active counterintelligence: the US broadcast a false report of a water issue on Midway in the clear, then picked up the Japanese report of the same issue. They used that to discern which codeword Japan used for Midway.
You're right, but it was still scant information on which to bet the fleet. The Japanese might have suspected that their code was broken, and so used disinformation to mislead the US Navy.
Hell, it's what I would have done whether I thought the code was broken or not.
The Germans had plenty of evidence that Enigma was broken. The High Command refused to believe it. I would have used the broken Enigma to send the Allies into a trap.
The way to play the code breaking game is to assume the enemy has broken it, and act accordingly to your own advantage.
Even if you know one of your widely-used codes or cyphers has been broken, I don't think it is that easy to make use of that fact, except perhaps briefly and in a limited way.
To conceal the fact that you know that it is broken, you would need to maintain use of that code at similar levels as before, without approximately doubling the signal traffic by sending the real communication under a new code. Furthermore, the fake traffic under the original code must be realistic to the degree the enemy can verify it, as they can read it, and if a major code has been broken for a period of a few weeks or so, the enemy presumably has plenty of information to use in verifying new messages, at least for a while (the verification need not be explicitly performed, at first; if new messages seem to be inconsistent with what is already known, questions are likely to be raised.)
Compromised minor cyphers and codes are another matter, and that is exactly how the Midway ruse worked.
For Nazi Germany, the "fake traffic" would not be needed for all the services. Key change happened at midnight Berlin time for all operators. The radio operators stayed up late into the night sending the personal correspondence of various officers to their families. The codebreaking process fed this huge volume of messages into the "cribbing" process, which aided in recovering the traffic. By the time they had extracted enough of the key to decrypt traffic, normal military communications had started.
Correction: I wrote ‘without approximately doubling…’ where I meant ‘while approximately doubling…’ - and then one must take into account sidewndr46’s interesting point.
How do you keep your allies from believing your fake encoded messages and taking the same action that they would have taken, had you not suspected the code was broken?
There was a lot of this. The Enigma cracking team would use things like weather reports and convoy sightings as known plaintext for their work. If you pick up a submarine transmitting near a convoy, it’s probably saying that it saw a convoy at such and such coordinates. The same key was reused for the other messages from that day so cracking one let you read them all.
If I were running it and transmitting coordinates, I'd give the U-Boot commanders one-time pads to obfuscate the coordinates, and then encrypt the entire message.
> The same key was reused for the other messages from that day so cracking one let you read them all.
I know. The Germans were simply idiots in their hubris about Enigma. The evidence it was cracked was overwhelming, but Doenitz just dismissed it all. Rommel was also defeated by decoded Enigma messages, and he dismissed all evidence of its subversion.
The weird thing is that it's plausible the Enigma could have been used securely during WWII. With the right number of rotors, the right choice of keys, and the right key-rotation schedule, it is possible.
Of course, when you believe you are the most advanced nation, what is the incentive to improve?
I wonder how hard it would have been to provision each submarine with enough one-time pad to cover all secure communication for their entire patrol. I don’t know how much radio traffic there was, but it seems like that would not have been a major burden.
Hand them a newspaper. Plenty of text there to use as a pad. The code breakers would need to know both the newspaper used, and the algorithm used to select letters from the paper in order to crack it. Or use a magazine to provide the algorithm.
Every U-Boot mission gets another newspaper. Every U-Boot has a different newspaper. There weren't that many U-Boots, so this would be manageable.
Even decoding one U-Boot's transmissions would not compromise the others.
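A toy sketch of the running-key idea (all names and strings here are invented; note that natural-language key text is not a true one-time pad and is itself cryptanalyzable, which is the classic caveat with the newspaper scheme):

    // Toy running-key cipher: shift each message letter by the corresponding
    // letter of an agreed key text, e.g. a page of that day's newspaper.
    fn running_key(msg: &str, key: &str, encrypt: bool) -> String {
        msg.chars()
            .filter(|c| c.is_ascii_alphabetic())
            .zip(key.chars().filter(|c| c.is_ascii_alphabetic()))
            .map(|(m, k)| {
                let m = m.to_ascii_uppercase() as u8 - b'A';
                let k = k.to_ascii_uppercase() as u8 - b'A';
                let c = if encrypt { (m + k) % 26 } else { (m + 26 - k) % 26 };
                (c + b'A') as char
            })
            .collect()
    }

    fn main() {
        let key = "weather fair over the northern approaches"; // agreed page text
        let ct = running_key("GRID BD SEVEN TWO", key, true);
        assert_eq!(running_key(&ct, key, false), "GRIDBDSEVENTWO");
        println!("{ct}");
    }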
The U-Boats had plenty of other operational failures. Whenever a U-Boat was potentially damaged by the Allies, a notice would show up at one of the British listening stations: when the U-Boat came into port, it'd radio in advance that it was damaged and might need special provisions (couldn't steer well, etc.) or even a tug to make it into the harbor. This all happened in German on HF radio, which propagates really well, at least to Britain and possibly all the way to the mainland US.
The Allies basically got free reports of the exact damage they inflicted on subs this way. On the other hand, if a report didn't show up within a few days, they had probably sunk it.
This debate about regulations is always interesting. There are regulations that help protect the environment, like not being allowed to dump dangerous chemicals into your local stream or river.
Then there are regulations like these which are aimed at protecting the investment companies have made into infrastructure, effectively granting them a monopoly.
When people debate this, they are often thinking of the first class of protective regulations as too onerous on companies, but I think most people like clean drinking water and rivers that no longer catch fire.
The second class of protection, on the other hand, is really harmful to the consumer: the powers-that-be have effectively been given a monopoly, and with it the money and power to protect their place in the market through continued influence on elections and other means of maintaining these rent-seeking businesses. We all hate the latter, but these companies have a lot of sway over politicians.
And from the article, the telecom industry receives billions in corporate welfare. A common argument against cutting it off is that telecom is capital-intensive infrastructure, and if you cut their govbux you're blocking poor people from being able to communicate, and we all deserve the right to communicate. But if that's your take, how can you also hate the protectionist laws? Telecoms are given a monopoly because it doesn't make sense to have, say, N sets of telephone poles or power lines, one from each provider.
In some countries there is a cable and data connection owner and then a separate service provider. Laws require the cable owner to let other companies provide connections to customers over its cables. The service provider pays the cable owner for the bulk data its customers use, and the cable owner can't charge other providers more than it charges itself when it acts as a service provider.
Not sure if that made sense! I pay Company B, but my fibre connection is provided by Company A. If I want to change to Company C, I start a contract with C, and the only thing I change is the cable modem.
    // Good?
    for walrus in walruses {
        walrus.frobnicate()
    }
is essentially equivalent to
    // BAD
    for walrus in walruses {
        frobnicate(walrus)
    }
And this is good:
    // GOOD
    frobnicate_batch(walruses)
So should the first one really be something more like:
    // impl FrobnicateAll for &[Walrus]
    walruses.frobnicate_all()
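For what it's worth, a minimal sketch of that last shape (the trait body here is invented; the point is just that the batch version owns the loop and can amortize any per-batch setup):

    struct Walrus;

    impl Walrus {
        fn frobnicate(&self) { /* per-walrus work */ }
    }

    trait FrobnicateAll {
        fn frobnicate_all(&self);
    }

    // Implemented on the slice itself, so both `&[Walrus]` and
    // `Vec<Walrus>` (via deref) get the method.
    impl FrobnicateAll for [Walrus] {
        fn frobnicate_all(&self) {
            // Any setup shared across the batch is paid once here,
            // not once per walrus.
            for walrus in self {
                walrus.frobnicate();
            }
        }
    }

    fn main() {
        let walruses = vec![Walrus, Walrus];
        walruses.frobnicate_all();
    }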