Juergen Schmidhuber and his students came up with many of the basic NN building blocks, such as RNNs, LSTMs and others, without behemoth resources (and several years earlier). To the best of my knowledge, the one thing they DIDN'T come up with was the transformer; however, it is very likely someone would have within a 5-10 year time frame.
I'm well aware of Schmidhuber; still, the scaling of compute was critical. The reason Schmidhuber didn't go all the way is, again, scaling/capital, which accrues to monopolies who can afford wildly speculative research. Also, LSTMs, RNNs, etc., while effective for their time, were dead ends.
I'm not saying that; I'm saying you need a company with huge amounts of cash on hand to spend on R&D. The monopoly provides a safe space for it. Like it or not, this system has worked.
Nim has solved this in a very practical way, which works great - case insensitive except for the 1st letter, and underscore insensitive. So hello_world, helloWorld and hello___wOrlD are all the same identifier, though HelloWorld and hello_world are not.
Many people complain that this is horrible, but not people who have actually tried it. It works very well, and lets you bridge differences between C++ library conventions, Unix C conventions and Microsoft conventions in a project that interfaces with all of them, without getting users upset. (There's more to it - consistency for any given identifier can be enforced, there's a smart source reformatter, and more.)
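For the curious, the rule is easy to model. This is a rough Python sketch of the identifier-equality rule described above (not Nim's actual compiler code): two identifiers match when their first characters are identical and the rest compares equal after dropping underscores and case.

```python
def nim_ident_key(ident: str) -> str:
    # Nim-style identifier equality: the first character is compared
    # as-is; the rest is compared ignoring case and underscores.
    return ident[0] + ident[1:].replace("_", "").lower()

print(nim_ident_key("hello_world") == nim_ident_key("helloWorld"))    # True
print(nim_ident_key("hello_world") == nim_ident_key("hello___wOrlD")) # True
print(nim_ident_key("HelloWorld") == nim_ident_key("hello_world"))    # False
```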
I think Nim is much more of a Pascal successor than Go; it retains some of the syntax and feel, although it uses Python-style indentation instead of begin/end.
It is by no means Pascal-simple - although the subset used by 95% of programs is.
It does have everything you need, including macros, unsafe access, and minimal OO, but these are almost exclusively used by library developers.
It’s readable/writable on any Linux system if mounted properly - SHR is just an (extremely well done and convenient) UI for setting up standard RAID partitions in a way that uses the entire disk.
Thunderbird doesn’t do most of this. It uses tags for emails and categories for events. Also, there’s no way to view all items with the same category. You can’t drag a contact to the calendar or task list. Can’t set specific reminder times. …
Interesting. Surprisingly, it decided to encode the multiplication and division as addition/subtraction loops, which is incredibly inefficient - multiplying e.g. 32,000 by 32,000 (ignoring the overflow ...) will take 1,024,000,000 iterations, so thousands of seconds on the Speccy's humble 3.5 MHz Z80 (8 instructions, each taking at least 2 T-states ...)
Here is the multiplication loop (division is similar but in reverse, subtracting instead of adding).
mult_loop:
; Check if BC is zero
ld a, b
or c
jr z, mult_done
; Add HL to result
ex de, hl ; DE = multiplier, HL = result
add hl, de ; Add multiplier to result
ex de, hl ; DE = result, HL = multiplier
; Decrement counter
dec bc
jr mult_loop
No, but I would expect to do binary long multiplication and division, which have been used since before the intel 4004, and which it has surely seen in many Z80 and 6502 code bases fed into it.
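To make that concrete: binary long multiplication examines each bit of one operand, so a 16-bit multiply takes at most 16 shift/add steps instead of tens of thousands of additions. A Python model of the idea (on the Z80 this would be ADD HL,HL plus a bit test; the masking mimics 16-bit overflow wraparound):

```python
def shift_add_mul(a: int, b: int) -> tuple[int, int]:
    """16-bit shift-and-add multiply; returns (product mod 2**16, loop count)."""
    result = 0
    iterations = 0
    while b:
        if b & 1:
            result = (result + a) & 0xFFFF  # ignore overflow, like the asm
        a = (a << 1) & 0xFFFF               # shift multiplicand left
        b >>= 1                             # consume one bit of the multiplier
        iterations += 1
    return result, iterations

product, steps = shift_add_mul(32000, 32000)
print(steps)  # 15 -- one pass per bit of the multiplier, not 1,024,000,000
```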
Oh interesting. One thing about the Speccy is that it's incredibly hard to hook in the eval technique (see middle/bottom of https://ghuntley.com/specs/ ), as deployment is manual/human. So I had to drive it all by hand and just accept it, as afaik there's no testing framework. If this were another programming language I would have taken the approach of creating a `cargo bench` over the application and then looping _that_ back into the LLM to identify and resolve the performance issues. I've done it; it works well. Just not on the Speccy :P
"Who cares if it's slow, at least the code is correct" - someone who never wrote 8-bit code in their life
It at least managed to regurgitate the fast multiply-by-10 algorithm, which of course exists in countless examples all over the web. But then, instead of maybe repeating that twice to multiply by 100, it produced absolutely insane code which - even at a glance - can't be correct unless the input happened to be zero:
; No decimal point, multiply by 100
ld b, h
ld c, l
add hl, hl ; * 2
add hl, hl ; * 4
add hl, hl ; * 8
add hl, hl ; * 16
add hl, hl ; * 32
add hl, hl ; * 64
add hl, bc ; * 65
add hl, hl ; * 130
ld b, h
ld c, l
srl b
rr c ; * 65
add hl, bc ; * 195
add hl, hl ; * 390
add hl, hl ; * 780
srl h
rr l ; * 390
add hl, hl ; * 780
srl h
rr l ; * 390
srl h
rr l ; * 195
srl h
rr l ; * 97.5
add hl, hl ; * 195
add hl, hl ; * 390
srl h
rr l ; * 195
add hl, hl ; * 390
srl h
rr l ; * 195
add hl, hl ; * 390
srl h
rr l ; * 195
add hl, hl ; * 390
srl h
rr l ; * 195
add hl, hl ; * 390
srl h
rr l ; * 195
add hl, hl ; * 390
srl h
rr l ; * 195
add hl, hl ; * 390
srl h
rr l ; * 195
jr done_convert
First a bunch of random left shifts and additions that overshoot the target, then an even more directionless attempt at correcting the result that seems stuck in a loop until it just gives up. Kind of reminds me of "PanicSort" (xkcd.com/1185)
And as you mentioned, it ignores overflow, which might be a real problem: even if it worked correctly, it could only handle amounts up to $655.35. A better solution would have used BCD arithmetic, which the Z80 provides dedicated support for (DAA and the half-carry flag).
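For reference, the trick the model half-remembered is x*10 = ((x*4)+x)*2: two shifts, an add, and a shift. Repeating it gives ×100 directly, no correction phase needed. A Python model of the 16-bit version (masking stands in for HL wraparound, which also illustrates the $655.35 ceiling):

```python
def times10(x: int) -> int:
    # x*10 = ((x << 2) + x) << 1, all mod 2**16 like 16-bit HL arithmetic
    return (((x << 2) + x) << 1) & 0xFFFF

def times100(x: int) -> int:
    # Multiply by 100 by simply applying the *10 routine twice.
    return times10(times10(x))

print(times100(123))  # 12300
print(times100(655))  # 65500 -- still fits; 656 and up overflows 16 bits
```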
The reason the web won is that it does NOT need specific clients for every single thing.
Essentially every kind of service (e.g. email, blogging, Q&A, live news) is available without JavaScript, i.e. through a pure HTML-over-HTTP interface. The problem with a-standard-protocol-per-service is that new uses arrive in a distributed, unplanned manner.
Looking at instant messaging history is instructive: there were 3 protocols in major use (AIM, MSN, ICQ) and about 20 others in common use. The “standards” committee was sabotaged by the major players for years and eventually disbanded, culminating in the only open option in some use (not major use, just some use) - XMPP - winning by default, except the providers explicitly chose NOT to interop (Facebook, WhatsApp when it was independent, Google Chat).
People still use clients for certain services; otherwise many sites wouldn't still be making them.
Of course you're right that this would all be hardcoded and would not allow new types of sites to work right away.
I don't know that you'd even need a protocol or service for each category of site. It would probably make more sense to use the same architecture for all types of services with something like a manifest to determine the mode. I think the challenge would be making the APIs public in a way that would be practical for the servers implementing them.
I think you misunderstood me. I'm not at all describing anything like an application runtime such as the web. By architecture I meant something more like server APIs that are flexible enough to be used as the backend for different kinds of sites.
For example: instead of an API for microblogs and another for blogs and another for news sites, it could just be one API with flags or something that determines which other calls are used and how.
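Something like the following is what I imagine by "one API with flags" - a capability manifest the server publishes and generic clients consult. All the names and fields here are purely hypothetical, just to show the shape:

```python
# Purely hypothetical capability manifest -- illustrative, not a real protocol.
manifest = {
    "service": "example.com",
    "capabilities": {
        "posts": {"max_length": 280},  # microblog-style posts
        "articles": True,              # long-form blog entries
        "live_chat": False,
        "boards": True,                # bulletin-board style threads
    },
}

def supports(manifest: dict, feature: str) -> bool:
    # A generic client checks the manifest before enabling a feature,
    # instead of hardcoding one protocol per category of site.
    return bool(manifest["capabilities"].get(feature))

print(supports(manifest, "boards"))     # True
print(supports(manifest, "live_chat"))  # False
```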
So let’s say we have Blogger with a Blogger API, and Twitter with a Twitter API, and by some miracle they agree on a merged-with-flags API.
Along comes Tumblr, and … the APIs don’t fit, in the sense that they significantly limit the planned functionality.
Now what? A new API! Or a committee to update the existing API!
Central planning doesn’t work for these things. And when you do finally have agreement on something (e.g. email), you either stagnate because no one can improve things, or get incompatible implementations because a significant player decides they will break the standard for some reason (e.g. recall and TNEF in Outlook, which only work with Outlook).
The internet started the way you describe (with finger .plan for “social” status, Unix talk for one-to-one, IRC for chat, email, NNTP, Gopher, FTP, X Windows, RDP, etc. etc.). All of these are ~98% web-based these days, because the protocol+client+server model cannot adapt quickly enough. The modern web, with presentation layer + runtime code delivery + transport protocol, does allow lightning-fast evolution.
So we had finger, talk, IRC, SMTP, NNTP, FTP, etc., all layered on TCP.
And now we have dozens of popular implementations of roughly the same functionality layered on REST/HTTP or JSON/WebSockets or whatever.
I suggest that the complexity is basically the same, from a programmer’s point of view. You define messages and a state machine and a bunch of event handling …
The UI is now universal (mostly), but the programming model for HTML/CSS isn’t simpler than, say, Xaw/Xt: it is more capable, but putting together a decent UI for a browser-based email client is not substantially easier than doing it in Xaw/Xt.
With one exception: our programming languages are better, and the ecosystem of libraries and frameworks makes what would once have been weeks of work an import statement.
We could do the same things in the same time using custom protocols over raw TCP as we do using JSON over WebSockets, given modern tooling, but the world has moved on. The entire ecosystem of libraries and services and network infrastructure channels you into using a vastly more complex stack.
The point of the flags, manifests or whatever is so functionality can be set by the site. Like one site wants to support live chat and another wants to support bulletin-board style posting.
The web is the best application runtime for making clients. However, I don't think its existence invalidates the creation of these kinds of protocols and server APIs. In fact some web standards, such as RSS feeds, could be described as such.
Free (libre) software is already independent of the US by virtue of being open source and free. In what way would your solution offer more/better independence?
I am all for a production 20K-line trusted free+open computing base, but … I don’t understand the logic.
It's humanly impossible to know what a program does once it grows beyond the size anyone can read in a reasonable amount of time.
For comparison, consider this: I'm in my late 40s and I've never read In Search of Lost Time. My memory isn't what it used to be in my 20s... All eight volumes are about 5K pages, so about 150K lines. I can read about 100 pages per day. So, it will take me about two months to read the whole book (likely a lot longer, since I won't be reading every day, won't read as many as 100 pages every time I do, etc.) By the time I'm done, I will have already lost some memories of what happened two months ago. Also, the beginning of the novel will have to be reinterpreted in the light of what came next.
Reading programs is substantially harder than reading prose. Of course, people are different, and there is no hard limit on how much program code one can internalize... but there's definitely a number of lines beyond which most programmers cannot process the entire program. If we want programs to be practically understandable, we need to keep them shorter than that number.
You have just given the rationale for STEPS, which I am aware of and agree with.
But the claim was that the EU should embark on and fund this to “gain independence from the US”, even though free software already gives you that independence.
So, my question is: in what way would this project make the EU less dependent?
North Korea reportedly has a Linux distribution, for example.
> even though free software already gives you that independence.
No, not in the way I'd want (and probably not in the way the parent wants), for all the same reasons. If you are given something you cannot understand, you depend on the provider for support of that thing. Even if your PC were shipped with the blueprints for the CPU, you'd still depend on the CPU manufacturer to make your PCs. The fact that you can sort of figure out how the manufacturer made one doesn't help you become the real owner of the PC (because the complexity of the manufacturing process makes it prohibitively expensive for you to become the PC's true owner).
But let's move this back into the software world, where the problem is just as real (if not more so). Realistically, there are only two web browsers, and the second one makes every effort to alienate its users and die forgotten and irrelevant. Chrome (or Chromium and co.) is "free", but it is so complex that if you wanted a substantial change to its behavior, you alone wouldn't really be able to effect that change. (Hey, remember user scripts? Like in Opera before it folded and became a Chromium clone? They were super useful, but adding that functionality back would be impossible nowadays without a major team effort.)
So... Chromium and co. aren't really free. They are sort-of free.
There are, unfortunately, many novel and insidious ways in which software freedom is attacked; subversion attempts come in a relentless tide. Complexity is one of the enemies of software freedom.
There are a lot of people in Europe working on KDE, including the really open web browser. They are the provider.
The problem of a web implementation not being a small thing is inherent, due to the size of the spec. You can definitely make a browser with many of the same functional capabilities in 20K lines, but it won’t be able to show the existing web or be a replacement for Chrome.
Many companies have a customized Linux kernel, which means you aren’t actually dependent on the provider.
In my opinion, the GGP’s claim that the EU should fund their STEPS-like project because “it will help avoid American dependence” is … not in line with reality, just a straw-man argument to grab available funds.
Other than that, I agree it’s desirable for everyone to have such a thing. But not in any way because of American hegemony over Chrome.
Is this early 2000s? Dr. Sbaitso circa 1992 sounded better. AT&T had an offering in 2002 that sounded completely natural.