esmi's comments (Hacker News)

It’s a nice tutorial on base-plus-index addressing, but from the title I expected a tutorial on pointer tags, as x86_64 is what makes tags even possible, i.e. we have a 64b address space but not 2^64 memory locations.

https://www.mikeash.com/pyblog/friday-qa-2012-07-27-lets-bui...

And for ARM.

https://www.mikeash.com/pyblog/friday-qa-2013-09-27-arm64-an...


> i.e. we have a 64b address space but not 2^64 memory locations.

Except the designers foresaw this and established Canonical Addresses[0] to prevent people from using that "unused" space for tags. The space is explicitly reserved. This is probably why LuaJIT uses NaN tagging of doubles instead of tagged pointers, even though that causes an issue of its own[1].

[0]: https://en.wikipedia.org/wiki/X86-64#Virtual_address_space_d...

[1]: https://github.com/LuaJIT/LuaJIT/issues/49


On ARM you can turn this off with TBI, FWIW.


On Arm, PAC and MTE eat that space instead. (And you'll have Morello with 128-bit pointers soon; let's see if it ends up being considered production-worthy for future Arm designs.)


Actually, Objective-C's tagged pointers mostly rely on malloc's alignment guarantees.
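A minimal sketch of the alignment trick (illustrative only, not Objective-C's actual tag layout): with 16-byte-aligned allocations, the low four bits of every pointer are always zero, so a small tag can ride there without losing any address information.

```python
# Illustrative low-bit pointer tagging. Assumes 16-byte-aligned
# allocations, so the low 4 bits of every address are free.
TAG_BITS = 4
TAG_MASK = (1 << TAG_BITS) - 1   # 0xF

def tag_pointer(addr, tag):
    assert addr & TAG_MASK == 0, "needs a 16-byte-aligned address"
    assert 0 <= tag <= TAG_MASK
    return addr | tag            # stash the tag in the low bits

def untag(tagged):
    # recover the original address and the tag
    return tagged & ~TAG_MASK, tagged & TAG_MASK

p = tag_pointer(0x7FFFA0001230, 0b101)
addr, tag = untag(p)
```

Schemes that use the high bits instead (the ones canonical-address checks forbid on x86-64) work the same way, just with a shift instead of a mask.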


This is the case for most tagged pointer systems. Indeed most of them come from a time when 32 bit support was required.


Sure. Some software has to exist to make use of this system, for example something has to create the tag in the first place, and malloc is a part of that, but the large address space is what makes them possible.


It used to be fairly straightforward to beat a chess engine... in the 80s. https://www.chess.com/blog/FangBo/how-to-beat-an-80s-chess-c...


Is it like Automator for enterprise applications?

http://macosxautomation.com/automator/


Only in the sense that it automates stuff. RPA is differentiated by its use of machine learning to "observe" your workflow and automatically create automations, even where it's hard to create a standard macro.


I’ve got a fairly comprehensive AutoHotKey script going on at work to macro the fck out of repetitive tasks in the CAD / CAM setup I use.


A digital computer is definitely an FSM. In fact, creating the state diagram of a simple calculator, mapping it to digital logic by hand, and then building it on a breadboard used to be a staple of most introductory computer architecture courses. Now they just simulate everything. :)

This is the technique: http://faculty.etsu.edu/tarnoff/ntes2150/statemac/statemac.h...
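The state-table approach can be sketched in a few lines. This is a toy turnstile rather than the calculator from the linked notes, but the table below is exactly the kind of (state, input) → next-state table you'd draw before mapping it to gates:

```python
# Toy Moore-style FSM: a turnstile. The transition table is the same
# artifact you'd hand-translate into digital logic on a breadboard.
TRANSITIONS = {
    ("locked",   "coin"): "unlocked",
    ("locked",   "push"): "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

def run(events, state="locked"):
    for ev in events:
        state = TRANSITIONS[(state, ev)]
    return state

final = run(["push", "coin", "push"])   # ends back in "locked"
```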


Personally I waffle on this myself. My instinct tells me it’s bad, but then I try to decide what trading cadence I would limit it to. Assume there is a CEO who wants to do something nefarious to manipulate the stock price. If they make a decision, how fast can the company actually act on it? Would or could they make the company act on 1-hour boundaries? Even if we limited trades to once per week, is that the long term we want a CEO to consider? At what time frame does a stock stop being a stock and become something else? Then I decide that even minute-by-minute trading is not something management can realistically act on, even if they want to, so there is really no point to artificial limits imposed by law, and I take a more laissez-faire attitude.


Don't limit cadence. Just tax transactions, and cadence will limit itself.


I waffle on this one too. Institutions that do HFT have a huge capital cost and pay large fees before they even make their first trade. They also employ programmers, traders, etc. Basically, these taxes are going to need to sum to something really big to dissuade them. Too small, they ignore it. Medium, the market will just consolidate. Large, you'll stop HFT, but at what consequence? The probability of unintended consequences is high.

This is when I decide I don't really understand the market as well as I think, and I should stop solving the world's problems, and go back to designing circuits.


A tiny tax on each order (bid, ask) entry would cut the total volume of orders by a huge factor, cost them only a small fraction of their takings, and probably fund free college for everyone in the US.


> Its a ton of random math that (most) of which you don't use again since you use modeling software.

You absolutely need that math because you need to know when the modeling software is giving the wrong answer. You’re supposed to do quick and dirty calc by hand (ok fine I use mathematica) in a simplified system, then you refine with numerical software and compare the two. It’s shockingly easy to get the wrong answer with numerical CAD.


Not a mech eng but I've taken several eng classes and do a lot of DIY stuff. I've been designing a swing set/exercise rig for myself in Inventor and using FEA to sanity check my beam sizes for the given loads, just cause why not. Since I already had it parameterized, I wanted to see what load it would take with legs made out of 1.25x5.5 boards, just cause. The sim showed it would take several hundred pounds with almost no lateral deflection. Hmmm.

Anyone who's worked with decking boards knows they are pretty wobbly by themselves. I'm staring at the results, intuitively knowing they're dead wrong. So I model a plain column of one of these boards 16' long and 2000lbs, straight down. Zero side deflection.

Ah, I realized, it doesn't model buckling.

Map != territory.

It's always important to have multiple independent lines of inference on a problem.
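For this particular failure mode, a back-of-envelope Euler buckling check is one of those independent lines. A rough sketch for the scenario above (the modulus and end conditions are assumptions, not measured values):

```python
import math

# Euler buckling check for a 1.25" x 5.5" board, 16 ft long, loaded
# axially. E is an assumed softwood modulus (~1.2e6 psi); pinned-pinned
# end conditions (effective length factor K = 1) are assumed too.
E = 1.2e6              # psi, assumed
b, h = 5.5, 1.25       # in, cross-section
L = 16 * 12            # in, column length

I_weak = b * h**3 / 12.0                 # weak-axis second moment, in^4
P_cr = math.pi**2 * E * I_weak / L**2    # Euler critical load, lb

# With these assumed numbers P_cr comes out under ~300 lb, nowhere near
# 2000 lb: a linear-static FEA showing zero side deflection under that
# load isn't modeling buckling at all.
```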


> If a researcher can determine that a tweet came from a bot

That condition is the key. Are you sure they can do that? It’s very tough to do with certainty.


> "If everyone else is such an idiot, how come I'm not rich?"

Unfortunately the answer to that question is: I spend way too much time on Hacker News.


How about a 2 state 3 symbol Turing machine? That’s pretty simple, and universal too.

https://en.m.wikipedia.org/wiki/Wolfram%27s_2-state_3-symbol...
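A Turing machine simulator itself is only a few lines. The rule table below is a hypothetical toy machine (it overwrites 0s with 1s and halts at the first blank), not Wolfram's actual 2,3 rule table, which is in the linked article:

```python
# Generic one-tape Turing machine simulator. RULES is a toy example,
# NOT Wolfram's 2-state 3-symbol machine.
def run_tm(rules, tape, state="A", head=0, blank="_", max_steps=1000):
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        sym = cells.get(head, blank)
        if (state, sym) not in rules:          # no applicable rule: halt
            break
        state, write, move = rules[(state, sym)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in range(min(cells), max(cells) + 1))

RULES = {
    ("A", "0"): ("A", "1", "R"),   # overwrite 0 with 1, keep scanning right
}

out = run_tm(RULES, "000")
```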


It's not directly comparable, since this Turing Machine is not a self-interpreter in the sense of interpreting arbitrary programs in a language of Turing Machines.


First off, I admit I didn’t do my homework so I have no idea what I’m talking about, but couldn’t the Turing machine make a Turing machine interpreter and therefore be a self interpreter? It is universal, no?


There's different notions of universality.

In the more abstract one, you can emulate arbitrary computation by preparing the system (in case of the 2 state 3 symbol TM, its tape) in an appropriate configuration, letting it run until the configuration satisfies some condition, and then extracting the result from the final configuration.

In a more concrete one, you have a language of programs, and a universal program takes a program description as input, and interprets it. I give a slightly more formal definition of such a notion of universality, as applicable to Algorithmic Information Theory, in [1].

[1] https://tromp.github.io/cl/Binary_lambda_calculus.html#Unive...


As tromp has a meta-circular implementation of lambda calculus maybe that Turing Machine could be implemented in lambda calculus and we could have an objective measure of which one is "simplest"?


It takes 829 bits of binary lambda calculus to interpret BrainFuck [1], which is a very simple programming language modeled after Turing Machines. A self-interpreter in BrainFuck takes well over a thousand bits though [2]. Lambda calculus is not only very simple, but also very expressive. While combinatory logic, with only S and K, is even simpler than lambda calculus, and also has a trivial binary encoding (00 for S, 01 for K, and 10 for application), the shortest known self-interpreter is 263 bits, appreciably larger than for lambda calculus. My IOCCC entry [3] has more examples of the conciseness of binary lambda calculus.

[1] https://tromp.github.io/cl/Binary_lambda_calculus.html#Brain...

[2] https://arxiv.org/html/cs/0311032

[3] http://www.ioccc.org/2012/tromp/hint.html
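The binary S/K encoding mentioned above (00 for S, 01 for K, 10 for application) is simple enough to decode and reduce in a few lines. A toy sketch, not tromp's actual implementation:

```python
# Decode and reduce binary combinatory logic: 00 -> S, 01 -> K,
# 10 -> application of the next two terms. Terms are "S", "K", or
# (function, argument) tuples.
def parse(bits, i=0):
    tag = bits[i:i + 2]
    if tag == "00":
        return "S", i + 2
    if tag == "01":
        return "K", i + 2
    if tag == "10":
        f, i = parse(bits, i + 2)
        x, i = parse(bits, i)
        return (f, x), i
    raise ValueError("bad encoding")

def reduce_sk(term, fuel=1000):
    """Normal-order reduction: K x y -> x, S x y z -> x z (y z)."""
    for _ in range(fuel):
        spine, t = [], term          # unwind the application spine
        while isinstance(t, tuple):
            spine.append(t[1])
            t = t[0]
        args = spine[::-1]
        if t == "K" and len(args) >= 2:
            new, rest = args[0], args[2:]
        elif t == "S" and len(args) >= 3:
            x, y, z = args[:3]
            new, rest = ((x, z), (y, z)), args[3:]
        else:
            return term              # no redex at the head: done
        for a in rest:               # reattach remaining arguments
            new = (new, a)
        term = new
    return term

skk, _ = parse("1010000101")         # S K K, i.e. the identity combinator
result = reduce_sk((skk, "S"))       # (S K K) S  reduces to  S
```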


How does this logic compare to say Forth, APL or Joy? As far as I'm aware, they are all combinatorial languages - are they effectively implementations of SK combinators?


"The Theory of Concatenative Combinators" (http://tunes.org/~iepos/joy.html) connects combinatorial logic with Joy.


FWIW, I think concatenative notation is the simplest useful computational framework. ( https://joypy.osdn.io disclosure: it's my project.) But I'm not mathematically sophisticated enough to make the formal argument.


> web browsing, text editing and terminal.

If that’s all you want, try an iPad 2. I think it would make a better “laptop” than an RPi; the RPi doesn’t have a battery.


Good luck testing your design in several browsers on an iPad :-/


Where exactly did you get "testing your design in several browsers" from "web browsing, text editing and terminal"?


Could always use something like https://www.browserstack.com/ if you really want to.


Does it support opening Dev tools? If so, terrific.


It's slightly too laggy to be that useful like that, imo. Sure, you can pop open the console and get some info, but prolonged manual use is painful.

It's a fantastic automated tool, however; and the live interface is great for irregular checks.

