Maybe there was a self-conscious "civilization" before the Big Bang. From my understanding, we know little to nothing about anything before it.
Unless you believe that this universe is just playing out holographically on the event horizon of an (N+1)-dimensional black hole in our parent universe, and that the Big Bang was just the birth of that one object's singularity.
Except not all dimension numbers have nicely defined physics and geometries. For example, 4+0 and 4+1 don't have symmetric pairwise particle interactions in the sense we have in 3+1.
I've been working with Ruby for 20 years and I've never needed something like this. It feels like a lot of added complexity for little practical benefit; the trade-off just doesn't seem worth it.
I disagree. I've also been working with Ruby for almost 20 years, and quite a few times I've come across situations where I definitely wanted to wrap my library in a module for namespacing, and it was almost always cumbersome. I'd rather not wrap it and let others (or myself) use the library namespaced under whatever name they choose. Right now, if you're working on a gem, you have to think really hard about how to name it so its top-level classes and modules don't conflict with any other modules and classes in the Ruby ecosystem.
> Right now, if you're working on a gem, you have to think really hard about how to name it so its top-level classes and modules don't conflict with any other modules and classes in the Ruby ecosystem.
Follow gem naming conventions and this is a non-issue -- both FooBar::Record and BazQux::Record can coexist for the foo_bar and baz_qux gems, respectively. If a gem defines other top-level constants outside of its gem module, that's against convention, i.e. bad practice, and the language shouldn't be modified to accommodate it.
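Concretely, that convention looks like this, and the two Records coexist with no help from the language:

# foo_bar gem (lib/foo_bar.rb)
module FooBar
  class Record
  end
end

# baz_qux gem (lib/baz_qux.rb)
module BazQux
  class Record
  end
end

FooBar::Record.new  # no ambiguity
BazQux::Record.new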
I'd like to hear of a real use case for namespaces that existing conventions don't already solve.
Wrapping `Record` in `FooBar` or `BazQux` is just not a good pattern, even if it's commonplace and part of the current guidelines. Most of the time you don't have two gems using the same class name, and you'd rather not type a long namespaced name every time you want to use `Record`. But when you do have gems with conflicting classes, you'd have the power to wrap one of them (or both) in a short namespace identifier and be in charge of the naming yourself. Using modules for wrapping is the current pattern, but modules should primarily be for mixins, not namespacing.
If you don't have a conflict, and the module is wrapping everything, nothing stops you from doing "include <yourmodule>".
Similarly, if you want to group modules that don't conflict together under a short module name, nothing stops you from doing:
module MyGroup
  include Module1
  include Module2
  ...
end
In other words, wrapping them doesn't remove your ability to use short non-namespaced names.
Using `include` to pull specific functionality into the class that will use it is, furthermore, an idiomatic way of avoiding that extra typing without polluting the global namespace.
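For example (FooBar and Record are placeholder names here, assuming the gem follows the convention above):

class Report
  include FooBar

  def build
    # Record resolves to FooBar::Record through the include,
    # and the top-level namespace stays clean.
    Record.new
  end
end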
For that matter, you can often get close to what you're arguing for without actually making any changes to Ruby:
def wrap_load(path) = class_eval(File.read(path), path)

module Test
  # some_file.rb will act as if defined within Test
  wrap_load("./some_file.rb")
end
You can do better than that, to get closer to emulating `require` rather than `load`, and to handle dependencies.
Overall, I think the fact that you can do these things suggests you could probably write a good enough plug-in `require`-style monkeypatch for the rare cases where you want to pull a library into a class or module, without needing a language extension.
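A minimal sketch of what I mean, with made-up names and without dependency handling (if some_file.rb requires other files itself, those won't be wrapped, so real usage needs more work):

module NamespacedRequire
  LOADED = {}

  # require-like: evaluate the file inside the receiving module and
  # remember it, so loading the same file twice is a no-op.
  def namespaced_require(path)
    full = File.expand_path(path)
    return false if LOADED[[self, full]]

    module_eval(File.read(full), full)
    LOADED[[self, full]] = true
  end
end

module Sandbox
  extend NamespacedRequire
  # Constants defined in some_file.rb end up under Sandbox, e.g. Sandbox::Record.
  namespaced_require "./some_file.rb"
end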
Realistically it would be FooBar::Record and ns1::FooBar::Record anyway, unless gems start replacing their top-level module FooBar with FooBar = Namespace.new.
And why would you bother doing that when it would be a breaking change?
On a purely syntactic level, snake_case gives me a clear indication that this is something different (not a module). But the underlying mechanic of how namespaces would work, as I've mentioned in another comment, is that the choice of namespace identifiers is shifted to the users of gems/libraries. I think the proposal might further benefit from giving libraries default namespaces; Go does exactly this: you can import a package under the prefix defined by the package author, but you can also change it, or import it without any prefix at all.
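If I'm reading the proposal right (going by the Namespace.new spelling mentioned elsewhere in this thread; the exact API may differ), usage would look roughly like:

ns = Namespace.new
ns.require "foo_bar"  # the gem loads inside ns rather than at the top level

ns::FooBar::Record    # reachable through the namespace...
defined?(FooBar)      # ...but not leaked globally => nil

which is close to Go's model: the author ships a default name, and the importer can keep it, alias it, or drop it.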
>Avoiding name conflicts between libraries: Applications can require two different libraries safely which use the same module name.
>
>Is this a problem that happens on a regular basis? I believe Ruby has a pretty well-established convention for libraries to expose a single module with a name that corresponds to their gem name.
I really don't think we want to make it easier for newbies to alter gem naming conventions and run multiple versions of a gem within the same project; that sounds like a genuine nightmare to me. From jumping in to fix broken and crippled Rails projects for startups, I've found that the fuckup surface area is already high enough.
I've been working with Ruby full-time for pretty much 20+ years as well, and there are many parts of it I've never used, although when I need one of its many features, I love that it's there. Just last month, for example, I used a refinement for the first time, to help clean up some code I didn't want polluting top-level classes everywhere.
Hadn't thought of it in that way, but there's some merit to that if you include government, police, and power in general. Law enforcement needed really high penalties on counterfeiting money and check fraud to make cash and checks work. And I guess some of that is still the case with credit card fraud.
"Fraud," and there is no historicity to the idea that counterfeiting and adulteration only became a problem with the introduction of paper instruments. Indeed those replaced specie in considerable part to reduce opportunities for chicanery! Gold is gold, after all.
That could already be the case. The initial release is from 1990, so the codebase is at least 35 years old.
I don't have a good guess for the average age of software developers at Microsoft, but claude.ai guesses the average "around 33-38 years" and the median "around 35-36 years old".
Indeed. The runtime is typically a lot faster than Rails; response times in dev mode are <1 ms, which is fun to see coming from Rails :)
My personal dev speed, however, is a little slower in Phoenix than in Rails, and I've been using Rails for almost 20 years and Elixir for 10.
There are categories of problems that are much easier in Elixir (high concurrency, low latency, etc.), and for those I would always pick Elixir. If you're planning "just" a CRUD-like web application, Ruby will probably get you to product-market fit faster; there are (well-maintained) gems for almost everything.
Really nice! Thank you for including a YouTube video. It's a little unfortunate that the video cuts between your "prompt" and the response; I'm curious whether you were waiting 0.5 s or 10 s for it. I think the usability/fun of this stands or falls with that latency.
Splitcast Technology built this in 2012. The company folded (it couldn't find revenue and had founder struggles), but as far as I remember, the tech worked. It still needed a lot of seeding nodes, but a significant chunk of the bandwidth was provided by the "viewer peers".
A key part of that tech was that it synchronized playback between all peers, which was nice for stock market announcements and sports events, for example.
As long as "a lot of attempts" takes longer than the time it'll take the sun to expand and envelop the earth, that's not really a problem.
Every form of authentication is subject to either "a lot of attempts" or trivial DoS (rate-limit the login API and an attacker can lock the admins out too). The principles behind modern authentication mostly boil down to "how do we make verification require even more attempts (or make each attempt cost more) if the attacker doesn't know the password".
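One concrete instance of that principle is key stretching: make each verification deliberately expensive so every guess costs the attacker real work. A rough sketch with PBKDF2 from Ruby's standard openssl library (parameters are illustrative, not a recommendation):

require "openssl"

password = "correct horse battery staple"
salt = OpenSSL::Random.random_bytes(16)

# ~200k iterations means each guess costs ~200k HMAC evaluations,
# so an attacker's guess rate drops by roughly that factor.
stored_digest = OpenSSL::KDF.pbkdf2_hmac(
  password,
  salt: salt,
  iterations: 200_000,
  length: 32,
  hash: "SHA256"
)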
What is "a lot of attempts"? I'm no expert in cryptography, but there's many orders of magnitude difference between a distributed bruteforce of a known hash, and bruteforcing over the web.
"a lot of attempts" is doing a LOT of heavy lifting here.
If your password were 20 characters of random letters (both upper and lower case) and digits, then even if you could attempt 1,000 logins/second (a very high number for an online attack), exhausting the keyspace would take a whopping 22,000,000,000,000,000,000,000,000 (roughly 2.2 × 10^25) years.
If you could do 1,000,000 logins/second, an absolutely absurd number for an online attack, that only takes 3 zeros off that number.
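Back-of-the-envelope in Ruby, assuming a 62-character alphabet (upper, lower, digits) and exhausting the full keyspace:

combinations = 62 ** 20                       # ≈ 7.0e35
seconds_at_1k = combinations / 1_000.0        # at 1,000 guesses/second
years = seconds_at_1k / (60 * 60 * 24 * 365)  # ≈ 2.2e25 years

# At 1,000,000 guesses/second it is still ≈ 2.2e22 years.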
The rule of thumb that employees actually cost a business roughly twice their salary is based on two things:
1. Retention. Hiring costs are huge, so depending on your average retention they may make up a disproportionate cost compared to salary. Ramp-up time and the loss of institutional knowledge are no joke either.
2. A spread of average wages. $500k is not an average salary, and a big chunk of the costs is relatively fixed. $1,000 a month of software licensing isn't an uncommon number, and that's fully a third of the salary of a $3k/month ($36k/year) junior clerk; it's peanuts next to a $500k/year salary. It may be that the clerk is, all in, costing the company 3x their salary after indemnity insurance and so on, while for the dev those fixed costs will never reach even 10% (rough comparison below).
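A rough illustration of point 2, using the numbers above ($1,000/month of fixed overhead):

overhead = 1_000.0
{ "junior clerk" => 36_000.0 / 12, "senior dev" => 500_000.0 / 12 }.each do |role, monthly|
  pct = (overhead / monthly * 100).round(1)
  puts "#{role}: fixed overhead is #{pct}% of salary"
end
# junior clerk: fixed overhead is 33.3% of salary
# senior dev: fixed overhead is 2.4% of salary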
It's really not, at scale. It's on the order of $500 a month per dev for "gold"-level coverage at a company of 50 people, and I'm sure it's less the larger you get.
It might depend on the state and the age pool, but I have to pay a percentage, and based on that it's more like $10k/year. So you're undercounting by almost 2x.
... But maybe if the average employee of a company is 25, they could get a better deal.