Pushing out new versions of your site. You can't have the new assets on only half the nodes serving your site, or your site effectively goes down while things slowly propagate.
> Why is this the case? I don't have too much knowledge of CDN architecture, so I am curious.
Fastly is not really a regular CDN. It is a fully programmable edge cache whose cache-control logic is decided and controlled by the customer and runs at the edge. You can think of your Fastly configuration as part of your code base: it is up to you whether an action runs at the edge on a per-request basis, or at the origin once per cached request.
That in turn means that if you deploy your API/web code 50 times a day, you are likely to deploy your Fastly configuration about the same number of times.
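To make that concrete, here's roughly what per-request edge logic looks like on Fastly's newer Compute platform with the Rust SDK. A minimal sketch only: the backend names and the path rule are made up for illustration, not anything Fastly prescribes.

    use fastly::{Error, Request, Response};

    // Runs at the edge for every request. Shipping a change to this
    // logic is a code deploy, just like deploying your API.
    #[fastly::main]
    fn main(req: Request) -> Result<Response, Error> {
        // Decide per request, at the edge, rather than once per
        // cached request at the origin.
        let backend = if req.get_path().starts_with("/static/") {
            "assets_origin" // hypothetical backend name
        } else {
            "api_origin" // hypothetical backend name
        };
        Ok(req.send(backend)?)
    }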
Zero trust seems to be very unrelated to this issue. The issue seems to have been a poison config breaking Fastly's stack. Zero trust is about verifying the authentication of devices/users. Unrelated things, really.
One of Cloudflare's top engineers previously wrote in this forum,
> This incident emphasizes the importance of the Zero Trust model that Cloudflare follows and provides to customers, which ensures that if any one system or vendor is compromised, it does not compromise the entire organization. [1]
Authentication is a part of a zero-trust model, not the whole thing.
> No single specific technology is associated with zero trust architecture; it is a holistic approach to network security that incorporates several different principles and technologies. [2]
They were referring to a completely different incident, involving compromised authentication to a camera system. I'd love to hear an explanation of how a zero-trust model would apply to this situation with Fastly. Given how little we know about the specifics of the Fastly incident, it seems like it would have to apply to a whole class of multi-tenant resource-exhaustion issues.
A blog post on how customer configurations cannot bring down other customers' sites would be great to see from Cloudflare. Fastly does not seem to be in a position to say that about its own stack, and I don't expect another company to know Fastly's stack that well.
I wish they'd rather use Python or Julia. GNU Octave is great, but as a clone of Matlab it's constrained by compatibility, doomed to always be second*, and at worst an enabler for Matlab. I'd rather break the dominance that Matlab has in many fields and replace it with free software.
* There are a few aspects where it is better. For example, I believe the variable editor introduced one or two releases ago is nicer, as it allows inline evaluation.
> rather use Python or Julia ... break the dominance that Matlab has in many fields and replace it with free software
I still think Octave has a niche to fill for students and for HPC, where software engineering (package management, compilation, etc.) is less of a concern.
Yep. Having each front-end server scale with the overall size of the front-end fleet is obviously going to hit some scaling limit. It's not clear to me from the summary why they are doing that. Is it for the shard map, or the cache? Maybe if the front end is stateful it's a way to do stickiness? Seems we can only guess.
From the summary I don't understand why front-end servers need to talk to each other ("continuous processing of messages from other Kinesis front-end servers"). It sounds like this is part of building the shard map or the cache. In the end, an unfortunate design decision. #hugops for the team handling this. Cascading failures are the worst.
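To illustrate why that design worries me: if each front-end server really does keep one channel (thread, connection, whatever) per peer (my guess, not something the summary confirms), the per-host cost grows linearly with the fleet and the fleet-wide total grows quadratically.

    // Toy numbers only: assumes each front-end server keeps one
    // channel per peer, which is a guess about the design.
    fn main() {
        for n in [100u64, 1_000, 10_000] {
            let per_host = n - 1;          // channels on each server
            let fleet_wide = n * per_host; // directed pairs overall
            println!("{n:>6} hosts -> {per_host:>6} per host, {fleet_wide:>12} fleet-wide");
        }
    }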
More like hardware from 20 years ago. Yeah, I hope the TI monopoly gets broken. Numworks looks exactly like the calculator I was dreaming about when I was in uni.
Imagine a future where, whenever the energy price drops in a region, companies suddenly show up with a bunch of such tubes, drop them in the ocean, do some batch processing, and once the energy price rises, pull them out and move on.
In reality, of course, connectivity would be a major problem, and energy price differences are probably not large enough to make it viable.
Although it doesn't have anything to do with computational load, pumped hydroelectric storage exploits exactly this kind of change in electricity price. At low demand, plants pump water up into a reservoir; at peak demand, they release it back through turbines, generate electricity, and sell it at the higher price.
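The economics are just arbitrage minus round-trip losses. A toy calculation (prices invented; the ~75% round-trip efficiency is a commonly cited ballpark for pumped storage, not a figure from any specific plant):

    // Buy cheap energy to pump water uphill, sell what survives the
    // round trip at the peak price. All numbers are illustrative.
    fn main() {
        let buy = 0.03;            // $/kWh paid off-peak to pump
        let sell = 0.12;           // $/kWh earned at peak when generating
        let round_trip_eff = 0.75; // fraction of energy recovered

        let margin = sell * round_trip_eff - buy; // $ per kWh bought
        println!("margin: ${margin:.3} per kWh bought"); // -> $0.060
    }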
Powering down some facilities during temporary price spikes would make sense (and someday, if hardware is cheap enough, operating only during peak solar generation / minimum local demand could make sense too). Being able to relocate over a few-month period for cost/legal/etc. reasons could also make sense, as could the special tax and planning/regulatory treatment a "ship" would get vs. a building on land.
It uses SYMBOLP instead of NUMBERP, which highlights the risk of introducing new bugs.
Which leads to the question: why can't Remacs just auto-wrap the lisp::* functions, which I assume are the Rust versions of the C macros? If you look at Emacs' C code, there are a lot of functions and macros that implement Elisp primitives in a C way. E.g., NUMBERP(x) will return 1 if x is a number, or else 0, so you can use this function to deal with Lisp objects in C code. The function that exports this primitive to Elisp is Fnumberp. Rust has a better type system than C and supports meta-programming, so why not have a simple wrapper that takes a (LispObject) -> bool and turns it into a (LispObject) -> LispObject? Similarly for other Elisp<->Rust types.
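Something like this looks mechanically doable. A rough sketch of what I mean (every name here, LispObject, QT, QNIL, the tag check, is a stand-in I made up, not Remacs' actual definitions):

    // Auto-wrap a Rust predicate into an elisp-style primitive:
    // (LispObject) -> bool becomes (LispObject) -> LispObject
    // returning t or nil. All names are hypothetical stand-ins.
    #[derive(Copy, Clone, PartialEq)]
    struct LispObject(u64);

    const QNIL: LispObject = LispObject(0);
    const QT: LispObject = LispObject(1);

    macro_rules! defun_predicate {
        ($fname:ident, $pred:expr) => {
            #[allow(non_snake_case)]
            fn $fname(obj: LispObject) -> LispObject {
                if $pred(obj) { QT } else { QNIL }
            }
        };
    }

    // Stand-in for the real NUMBERP tag check.
    fn numberp(obj: LispObject) -> bool {
        obj.0 & 0x7 == 2
    }

    defun_predicate!(Fnumberp, numberp);

    fn main() {
        let x = LispObject(2);
        println!("{}", if Fnumberp(x) == QT { "t" } else { "nil" }); // t
    }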
However, unless GNU Emacs is willing to accept the Rust replacement code, I don't think this will succeed. It is a lot of work, and it takes quite some time before it actually pays off. It seems simpler to do what GCC and GDB have done and switch from C to (a strict subset of) C++ to simplify at least some of the more painful C hackeries.
And the reasons given in the announcement are rather weak:
* "We can leverage the rapidly-growing crate ecosystem." Emacs recently added module support, which already allows leveraging all kinds of ecosystems.
* "We can drop support for legacy compilers and platforms (looking at you, MS-DOS)." How is that an opportunity when it effectively removes support for platforms?
> GCC and GDB have ... switch[ed] from C to (a strict subset of) C++ to simplify at least some of the more painful C hackeries.
Interesting! Where can I read more about this? Is it true that gcc compiles using g++? That's rather amusing. :)
> "We can drop support legacy compilers and platforms (looking at you, MS-DOS)."
> How is that an opportunity when it effectively removes support for platforms?
Supporting MS-DOS is something of a challenge now. I've not done any research, but I have vague notions that the GCC ports aren't really being maintained anymore. Besides that, many platforms (for example BeOS) are stuck on GCC 2.95.3, the rather famous last version with a particular ABI, and a lot of stuff is stuck on that GCC version. (I don't know many details, although I'm very interested to learn more if anyone else has any insight.)
With the above said, I do disagree with a wholesale willingness to summarily sweep legacy platforms off the table. Rust has saved itself some maintenance nightmares because it doesn't support DOS, but it means quite a large number of people are still stuck on C for industrial control system tooling. (Granted, I can't deny that I'm talking about a really tiny niche here...)
C++ builds can be made bearable if you don't go crazy with template meta-programming, use forward declarations, only expose actually-public data in the headers, and explicitly instantiate the most-used templates.
But especially, do use binary libraries, even across application modules. Don't spend endless time recompiling code that doesn't change.
> C++ builds can be made bearable if you don't go crazy with template meta-programming, use forward declarations, only expose actually-public data in the headers, and explicitly instantiate the most-used templates.
Exactly. E.g. Qt-based applications (which rely on object orientation, inheritance, and virtual methods) compile reasonably fast. However, the template-oriented style (e.g. as found in the Boost libraries) can increase compile times A LOT: I saw more than a factor of 10 for one of my libraries when I ported it from a Boost dependency to Qt. That said, the template-oriented style might really run faster, because there's less need for virtual function calls and other indirections.
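The same tradeoff is easy to see in Rust terms, if that helps (my analogy, not anything from the Qt/Boost world): generics monomorphize per type the way templates do, while trait objects dispatch through a vtable the way virtual methods do.

    // Static vs dynamic dispatch: generics compile a specialized copy
    // per concrete type (more compile time, no call indirection);
    // trait objects compile once and call through a vtable.
    trait Shape {
        fn area(&self) -> f64;
    }

    struct Circle { r: f64 }

    impl Shape for Circle {
        fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
    }

    // Monomorphized per S, like a template instantiation.
    fn total_area_static<S: Shape>(shapes: &[S]) -> f64 {
        shapes.iter().map(|s| s.area()).sum()
    }

    // One compiled body; each call goes through the vtable.
    fn total_area_dyn(shapes: &[&dyn Shape]) -> f64 {
        shapes.iter().map(|s| s.area()).sum()
    }

    fn main() {
        let shapes = [Circle { r: 1.0 }, Circle { r: 2.0 }];
        println!("static: {}", total_area_static(&shapes));
        let refs: Vec<&dyn Shape> = shapes.iter().map(|c| c as &dyn Shape).collect();
        println!("dynamic: {}", total_area_dyn(&refs));
    }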
Before you invest time learning C++, have you considered Rust? There are pros and cons to each language, but as they're both operating in the same space, it's something to keep in mind.
If you use C++ and believe Rust has the potential to reach the same market share and level of adoption C++ has now, then you should strongly consider learning Rust in the future. It has a compelling list of safety features and a more modern, ergonomic design.
I personally think Rust is going to go very far in the not too distant future, but I'm a slightly biased Rust fanboy. Do your own homework on the matter. You might decide your next language is Rust.
As I said in the top-level comment that opened this thread, I'm tentatively considering Rust. I'd love for it to take off and become a serious and reliable contender. I have a tendency to be conservative and cautious (generally), so I'm seeing how things pan out.
I guess it depends on what you're used to. Each time I switch from C to C++, compile times take some adjusting.
If you're not careful about how you organise your C++ code (be careful what you put in headers, don't go crazy with templates), compiling can get really slow, even for relatively small projects.
Just #include <iostream> and the compiler needs to process 37,799(!) lines and more than a MB of data. (You can reproduce this kind of measurement by running a file through g++ -E and counting the output; the exact numbers vary by compiler and library version.)
I'd edit this into the original comment if I could; here seems to be a good spot to put it.
I think I should add a bit of context about what I mean by slow compile times: I feel comfortable if I can hit CTRL+S* and have my code rerun or restart within about half a second. At one second, it should either be executing or have output something and finished, and if it takes more than about 10 seconds to get to whatever point I'm prototyping or making a decision on, I get jittery.
It's partly because I have a ridiculously short feedback loop, but also because I am (for some reason) sensitive to interactivity latency and I prioritize responsiveness extremely highly. So long compile times tend to lean on those buttons and make me uncomfortable.
With this in mind, the only C++ experience I've had so far was making some modifications to the Dillo web browser, which was the main web browser I ran on the 800MHz AMD Duron box I used between 2012 and 2014 (the same one I described in https://news.ycombinator.com/item?id=13344332).
It was a lot of fun - I modified the tab bar so tabs could be dragged around, moved between windows, etc :D - but the compile times were around 3 seconds! This was just for the test harness I built, which had an absolute minimum of code in it; I was also using GCC precompiled headers for literally everything (my prototype code was a ~300 line .cpp file, no headers or anything). This was a really big adjustment for someone used to tcc (~10ms compile time) or gcc (~300ms compile time) who felt that gcc took juuuust a bit too long to do its thing.
I've seen C and C++ code compile on friends' i3 boxes (haven't seen what an i7 can do yet) and been amazed. I think when I have a machine like that I'll probably be able to seriously get into the language. (I actually have an i3 here, but it had a memory failure several months ago; hopefully I can get some new DIMMs for it soon and see if the motherboard still works.)
--
* About ^S - I've yet to play with Emacs, which I understand takes two keystrokes to save a file; I mostly use Vim because it's installed everywhere. I far prefer ^S over ESC (Press)Shift ; (Release)Shift w Return i/a.
With the failure of Lisp Machines, Lucid pivoted to building a Lisp-like interactive environment for C++, with compilation at the function/method level. There is an old VHS video that someone uploaded to YouTube demoing it; search for "Lucid C++".
The last version of Visual C++ (v4) used a database repository as the binary representation of C++ code, also offering an interactive experience.
The problem with both environments was that they required powerful hardware that most people weren't willing to pay for, and consequently they died.
Microsoft now has a smarter linker with database support in Visual C++ 2017.
At the last LLVM conference, both Apple and Google discussed how to add similar, IDE-friendly capabilities to clang, partially based on the Swift Playgrounds development experience.
I think I vaguely recall learning about database-incorporating C++ compilers recently, and now I actually see them. This is really neat.
I'm definitely looking forward to LLVM getting these capabilities. It'll move C++ development forward by light years, I reckon. (For example, Chrome takes an hour to fully rebuild in CI on what I suspect is fairly modern server hardware.)
Canarying should detect this. It's not clear if they do it, or if the canary failed to report the problem.
Sharding by customers could help reduce the blast radius. But maybe not by much, if this was a very big customer.
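Back of the envelope with made-up numbers:

    // Toy blast-radius estimate for per-customer sharding. Uniform
    // customers: a poison config costs ~1/K of capacity. One whale:
    // isolation caps the damage at the whale's traffic share.
    fn main() {
        let shards = 50.0_f64;
        println!("uniform customers: ~{:.0}% lost", 100.0 / shards); // 2%

        let whale_share = 0.30; // hypothetical big customer, 30% of traffic
        println!("whale poisoned: ~{:.0}% lost (vs 100% unsharded)", whale_share * 100.0);
    }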