One difference is in "dead connection detection". How do you know that your AMQP connection is down? At some level you're polling, whether that be TCP keepalive, application keepalive or something else.
If you're doing polling, you're actually back at the same pre-webhook place - polling their server on some timescale which is a compromise between latency and load.
Yes, a TCP keepalive is generally cheaper than an HTTP long poll request, but only by a constant factor.
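To make the "constant factor" concrete, here's a sketch of turning on TCP keepalive from userspace, using Python's socket module (the three tuning options shown are Linux-specific; other platforms spell them differently):

```python
import socket

def enable_keepalive(sock, idle=60, interval=10, count=3):
    """Enable TCP keepalive probes on a socket (Linux-specific options).

    idle:     seconds of inactivity before the first probe
    interval: seconds between probes
    count:    failed probes before the connection is declared dead
    """
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)
```

Note that the worst-case detection time is idle + interval * count seconds - so you're choosing a polling interval either way, with exactly the latency-vs-load compromise described above.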
The setup process for our Cocoon home security device (https://cocoon.life) uses an audio transfer from the phone to pass wifi details and a token to bootstrap setup and a trust relationship.
After some time spent tuning and with some forward error correction, the reliability is good and it's not too harsh on the ears. Response from beta testers has been pretty positive.
Isn't the calculation (assuming, for simplicity, one CPU running at 100% only on application load):
60 requests/sec => each request takes 1/60 of a CPU-second == 16.6ms of CPU time to process? (This is time-on-CPU, and doesn't include time-waiting-for-CPU. I think time-on-CPU is the number you want if you're looking at optimising your codebase.)
> each request was taking 6 / 60 = 0.1s = 100ms of time using-or-waiting-for-the-CPU.
(emphasis mine)
In my original read, I thought her core count was greater than her load, so that would also be her direct time-on-cpu. Now I'm not so sure.
And while time-waiting-for-cpu might not be important for optimizing the codebase, you probably still want to know that your serving processes are waiting for CPU; after all, of the two numbers, that is the one closer to what your user's browser actually experiences. Such a result might indicate that a larger machine or more machines are required, for example.
Pretty much. If you want to get fancy you can make assumptions about the distribution of request arrival times and use the mean queue length of 6 to estimate the fraction of the time when the queue length drops to zero; you probably come out somewhere around 10% CPU idle time, so each request is taking 15 ms to process rather than 16.6 ms.
But cpu-time-per-request is definitely the number you want to pay attention to. If you cut that by a factor of 2, you won't decrease the load average from 6 to 3; you'll decrease it from 6 to less than 1.
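That estimate can be sketched with a few lines of arithmetic, assuming an M/M/1 queue (Poisson arrivals, one CPU) - an assumption the thread doesn't pin down, which is why this lands near, rather than exactly on, the 10%-idle / 15 ms figures above:

```python
# Back-of-envelope M/M/1 estimate. Mean number in system:
#   L = rho / (1 - rho)   =>   rho = L / (1 + L)
arrival_rate = 60.0   # requests per second
load_avg = 6.0        # observed load average (mean number in system)

rho = load_avg / (1 + load_avg)           # CPU utilisation, ~0.857
idle = 1 - rho                            # ~14% idle time
service_ms = rho / arrival_rate * 1000    # ~14.3 ms of CPU per request

print(f"utilisation {rho:.3f}, idle {idle:.1%}, {service_ms:.1f} ms/request")

# Halve the CPU time per request and recompute the load average:
rho2 = rho / 2
load2 = rho2 / (1 - rho2)                 # ~0.75, i.e. well under 1
print(f"after 2x speedup: load average {load2:.2f}")
```

This also bears out the parent's point: halving CPU-time-per-request takes the load average from 6 to about 0.75, not to 3, because the time-waiting-for-CPU component collapses along with the queue.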
> your business plan must make sense to the investors.
Would locking in dividends in the business plan allow the investors to become "directly involved" and make investment in a "going concern but not a unicorn" a rational choice for an investor?
Is there a significant benefit to this over using a new name for the result?
    y = (x + 1) / atan(x) - x;
    x = y; // if you're in a loop and updating some value
I would hope/imagine that a compiler could reduce the two to the same code - and I don't think it reads any worse. Is there a downside beyond the extra line of code?
To be clear: the assignment isn't the optimization; it's the use of a symbol for 'x' on the rhs so that the compiler can recognize it. To illustrate:
    X &x = { complex reference expression for x };
    y = (x + 1) / atan(x) - x;
Now the compiler has the clues it needs to write good code.
And with language support I don't have to introduce new names for each occurrence; just one name like <LHS> that readers can quickly learn.
Because from my point of view, that could only be the case if the scopes are weak or underused, or the names are bad (too short, etc.).
After all, SSA is all about single-use names.
Is the "but at that kind of energy output we'll boil the planet" limiting case really a hard limit though?
If we're at the stage of having super-dense energy sources and super-tech, couldn't we use those to cool the planet effectively?
Use a heat pump to move that heat off-planet? Drop some icy asteroids in the seas? Take compressed hot atmosphere off-planet in super-blimps, run it through a heat exchanger on Titan and then bring it back?
In the end, the resources available to a civilization can only expand as the third power of time, due to the speed-of-light limit. That means that the laws of physics do prohibit an indefinite exponential increase in energy use, barring new discoveries that look pretty unlikely.
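To put a toy number on that: with purely illustrative assumptions (demand growing 2% per year; reachable resources growing as t^3 behind a light-speed expansion front; both normalised so cubic supply starts ahead), the cubic term holds out for roughly a millennium before the exponential overtakes it for good:

```python
# Illustrative only: 2%/year exponential demand vs t^3 resource growth.
def crossover_year(growth=1.02):
    """First year t (>= 2) at which growth**t exceeds t**3."""
    t = 2
    while t ** 3 >= growth ** t:
        t += 1
    return t

print(crossover_year())  # a bit over 1000 years with these toy numbers
```

The exact year is meaningless (it shifts with the growth rate and normalisation), but the shape of the result isn't: any fixed exponential eventually outruns any polynomial bound.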
In the pre-smtp days, email links were over UUCP, which was often relayed via dial up lines which connected intermittently, on a schedule (e.g. every hour).
Add in the situation of disk quotas for university students, but no email quotas, and you get the means, motive and opportunity for people to use the email spools of mail servers around the world as an annex to their home directory...
Since the statute of limitations has expired, I will admit to creating data archival systems using UUCP as well as SMTP. Back in the 80s, before the Morris worm and spam, when the internet was just for us geeks, you could usually connect to anyone's mail server on port 25 and hand it mail to be delivered just about anywhere. Sendmail was more concerned with connecting up all of the various networks than it was with authenticating a destination, so you could connect to just about anyone's mail server and ask it to deliver mail back to your server; of course, your server would be conveniently offline for a period of time, or would drop the connection after checking the rcpt (chunk id) in the envelope, until you either wanted the data back or needed to accept it so that it did not bounce.

I probably have opq ("other people's queue") on a nine-track tape in the basement but have no way of loading it up again; anyone know what the odds are that a well-cared-for tape reel circa 1986 (bad tar format) is actually worth sending somewhere to get read?
Similarly, in the early AOL days, there were several "warez" distribution lists going around which had the pirated software directly attached to the emails (not links to external websites). As long as the emails stayed in AOL, you could forward them instantly (there must have been some deduplication on the back end making the attachments basically a link to the same files).
> The founders of the USA were not immigrants, they didn't immigrate to join Indian tribes, they were founders of a country.
Sorry, I'm afraid I don't understand the distinction you're drawing.
If some people move from one country to another, and the other place has people living there already, what characteristics of the move classify it as founding a country rather than immigration?
I wish I could believe you weren't serious, especially today, July 4. But, oh well.
The fact that there was no existing country here in the sense the European migrants understood one to be, and the fact that they didn't integrate with Indian tribes or attempt to.
The fact that they literally founded a country, established the institutions intrinsic to one, and expounded extensively on said founded country and the characteristics necessary to make said new country successful.
I'm interested in the details of what the variable scoping problems are. I'd have thought that perl's explicit declaration of lexicals with 'my' was better than the "declare at first use" of javascript/python/ruby.
[There's a class of mistake where you typo a variable name which you can make in those languages which gets caught in a language where you have to declare variables.]
I share your confusion over that assertion. I actually miss Perl's excellent block-scope capabilities when working in almost every other similar language. I imagine most other languages have gotten it by now, but a few years ago, when I was using Python for a client, it didn't have block scope and I missed it. A brief googling actually hints that maybe Python still doesn't have block scope, which is pretty weird, so I assume I'm googling wrong.
But, perhaps discussion of scope problems in Perl is just when talking about very old code. Perl had some scope atrocities back in Perl 4 and some weirdness in early Perl 5 days. But, for the past 15 years or so, it's been very predictable and has improved (like the syntactic sugar of "my" declarations in foreach or while expressions).
I seem to recall that Python not only didn't (doesn't?) have block scope, it tended (tends?) to produce a completely, blithely cryptic error message when the programmer assumes that it does have proper block scope and writes a program that fails as a result. Discovering that was memorable.
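You're googling right - Python (current CPython 3.x included) scopes names per function, not per block, and the memorably cryptic error is most likely UnboundLocalError. A minimal demonstration of both:

```python
# Python scopes are per-function, not per-block: names bound inside a
# for/if block remain visible after the block ends.
def leaky():
    for i in range(3):
        last = i * 10
    # Both 'i' and 'last' survive the loop - there is no block scope.
    return i, last

print(leaky())  # -> (2, 20)

# The cryptic failure: assigning to a name anywhere in a function makes
# it function-local, so reading it first raises UnboundLocalError
# instead of finding the outer binding.
total = 0
def bump():
    total = total + 1  # looks like it should read the global first...
    return total

try:
    bump()
except UnboundLocalError as e:
    print("UnboundLocalError:", e)
```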
One weirdness I encountered recently with nested functions:
    sub a {
        my $v;
        sub b {
            # captures only the first
            # instance of $v
        }
    }
This is because named subroutines are created once and variable captures are resolved at that time. I expected the more normal capture semantics you get with anonymous subs, like
    sub a {
        my $v;
        my $b = sub {
            # a new $b each time;
            # captures each $v
        };
    }
The lexical scoping rules are precisely what causes this behaviour. The first example defines a nested function which has access to the variables in scope when defined (as you remark).
It's little different from this block defined outside of any function:
    {
        my $v;
        sub b { ... }
    }
Whereas an anonymous function is "defined" each time the enclosing scope is evaluated.
Thanks - useful point. But, I think that's more of an oddity of nested, named subs than anything else. The usual anonymous subs close as you'd expect (as you say).