The first gen XReal glasses are similar in that you need software running on the host to get anything other than dumb monitor mode. With these newer models they've moved a bunch of the functionality into hardware on the glasses themselves, so you get virtual monitor and wider device support out of the box.
There are a couple of projects that are trying to get better open-source support for the Airs on Linux; I've not kept up with their progress.
I really liked the new XReal glasses because of the built-in head tracking. But I was getting a lot of drift over time, such that after anywhere between a few minutes and half an hour the center of the screen wouldn't be straight ahead anymore, meaning I had to reset the display location far too often. I ended up returning them because of it.
Oh ick. That's not so good. Hope they can firmware-update their way out of that. It's odd that they've got that problem because my experience with the tracking on the Airs with the desktop software driving it has been pretty much fine.
Waiting for the XReal One Pro to vibe code using Aider and Samsung DeX in a Termux terminal while walking my country's national parks (just something I would like to try)
I tried something like that two decades ago with a 640x480 monocular strapped to my sunglasses; with interpolation I could use a 1024x768 resolution, driven by an ARM-based PocketPC with host USB and CompactFlash video-out.
I used it for reading and 'fast' offline Wikipedia/TR3 database search with both a FrogPad and a Twiddler2, plus some voice commands.
Seeing the foreground and depth was actually OK, as the monocular screen on my left eye 'merged' semi-transparently with the real view thanks to brain processing. I assume this is a bit worse on the XReal.
The main issue was that when walking you bob up and down in a slight sine wave relative to the foreground. You don't usually notice this, but with a paragraph of text or code positioned in front of your eyes it becomes very distracting.
One solution is to use a mode of transport that doesn't involve this slight up-and-down movement, for example a bicycle or a car.
In both cases, the latter being the more problematic, it's not advisable safety-wise (or even illegal), and a screen-reader solution is better. I had the idea of using Emacspeak for this, or doing a smart Speakup-style echo from the command line.
Another solution is RSVP (rapid serial visual presentation), or RSVP combined with text-to-speech.
Samsung DeX is great though; Motorola, Huawei, and recent stock Android support a desktop mode too (if the phone supports video-out).
The latest XReal glasses provide 3DoF tracking natively in the hardware. In that case your eyes perceive the screen as stationary, even as you and it inevitably "bounce" during motion.
I still don't recommend walking around with them on while reading.
I will also plus-one Samsung DeX. It really is amazing to have a desktop-like experience with just glasses on, and it feels properly cyberpunk.
That's so cool, thanks for the detailed reply. I agree walking around with a big bouncy screen in front is probably not going to be very ergonomic. Still I can't let the idea go completely. Going RSVP+TTS might indeed be a more viable approach, certainly something to test with the current wave of AI/LLM agents.
I do remember a website a long time ago about a guy walking around with a setup like this. Seemed like it involved lead acid batteries and a lot of weight.
It was really interesting back then, when desktops were as big as large pizzas.
> I’ve found that simply converting a standard, normalized PostgreSQL database doesn’t work well in practice. Instead, you need to design your database with YugabyteDB’s distributed architecture in mind.
I wanted to reply to your previous posting, but ycombinator does not allow comments on older posts.
To put it simply: we spent almost 3 weeks trying out YB for our own usage and ran into massive performance issues.
The main issue seems to be that anything sharded (which it is by default) will perform sort, filter, and other operations only after retrieving sufficient data from those shards.
The result is that this massively impacts performance. The YB team somewhat hides this by spending massive amounts of resources on Seq Scans, or by requiring you to explicitly design your indexes pre-sorted.
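For concreteness, a minimal sketch of what "designing an index pre-sorted" means in YB's PostgreSQL dialect (table and column names are made up; YB lets you tag index columns with HASH or ASC/DESC, hash being the default):

    -- Default: hash-sharded index; entries are scattered across tablets,
    -- so an ORDER BY has to gather and re-sort across shards.
    CREATE INDEX idx_events_user ON events (user_id);

    -- Range-sharded ("pre-sorted") index: entries are stored in order,
    -- so ORDER BY created_at can stream from the shards directly.
    CREATE INDEX idx_events_user_time ON events (user_id HASH, created_at ASC);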
In our experience, a basic 3-node YB cluster barely does 7,000 inserts/s, whereas a pg instance easily does 100,000+ inserts/second (with replication). Of course there is the cheating, like "look how we insert 1 million rows/second" (3x), with 100 heavy AWS nodes, ignoring that they are just inserting rows with no generated IDs ;) If you put that same workload in front of pg, you're easily hitting 150,000 inserts/second on a single node.
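To illustrate the "no generated IDs" trick (hypothetical table names, plain pg):

    -- Honest version: the server generates every ID from a sequence.
    CREATE TABLE bench (id bigserial PRIMARY KEY, payload text);
    INSERT INTO bench (payload)
    SELECT md5(random()::text) FROM generate_series(1, 1000000);

    -- Benchmark-demo version: the client supplies the keys itself,
    -- so no sequence or ID generation is involved at all.
    CREATE TABLE bench_noid (id bigint PRIMARY KEY, payload text);
    INSERT INTO bench_noid
    SELECT g, md5(g::text) FROM generate_series(1, 1000000) AS g;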
There are issues like trigrams taking up 8x the space compared to pg trigrams (no row-level packing). Pre-sorted materialized joins are not respected, as inserts are distributed by hash: item 1 can be on node 3, item 2 on node 1, ... which then needs to be fetched and re-sorted, killing the performance benefits.
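The hash-versus-range distinction looks roughly like this (hypothetical table; in YB, ASC in the primary key switches from the default hash distribution to range distribution):

    -- YB default: hash on the first PK column, so item 1, item 2, ...
    -- are scattered across tablets and ordered reads must re-sort.
    CREATE TABLE mat_join_hash (
        item_id bigint,
        ts      timestamptz,
        data    jsonb,
        PRIMARY KEY (item_id HASH, ts ASC)
    );

    -- Range sharding keeps consecutive items together in order, at the
    -- cost of potential hotspots on sequential inserts.
    CREATE TABLE mat_join_range (
        item_id bigint,
        ts      timestamptz,
        data    jsonb,
        PRIMARY KEY (item_id ASC, ts ASC)
    );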
An optimized materialized join pulls out data in 0.2ms to 10ms, where we have seen YB take anywhere between 6ms and minutes. On the exact same dataset, inserted the same way, ... on only 100 million rows of data, which you would expect to be exactly these databases' playground.
Plugins that are broken in the new 2.25 version.
We lost yb-master and yb-tserver nodes multiple times. Ironically, the loss of the yb-master was not even reported in the main UI. It can only be described as amateurish.
CockroachDB has a similar series of issues with latency, insert speed, and more ... combined with the horrible new "we spy on you" free license that they may or may not extend every year. We found CRDB more mature in a sense, and less a Frankenstein's monster of parts, but lacking in Postgres features (or implementing them differently).
In essence, as long as you use them as MongoDB-like denormalized databases that happen to run SQL, great. The issues really start when you combine normalization with an expectation of performance.
And the resources that both use mean you need to scale to a 20x cluster just to reach the equivalent of a single pg node. And each of those YB/CRDB nodes needs twice the resources of the pg node.
In general, my advice is to just run pg, replicate with Patroni, and maybe scale to more pg clusters where you separate tables onto different clusters. Use the built-in postgres_fdw to create materialized read nodes to offload pressure and load-balance (rough sketch below). Unless you are running at reddit scale, the tons of disadvantages of YB/CRDB totally outweigh the benefits.
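The postgres_fdw read-node setup I mean, as a sketch (all hostnames, schema, table, and column names are placeholders):

    -- On the read node:
    CREATE EXTENSION IF NOT EXISTS postgres_fdw;
    CREATE SERVER primary_db FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'primary.internal', port '5432', dbname 'app');
    CREATE USER MAPPING FOR CURRENT_USER SERVER primary_db
        OPTIONS (user 'reader', password 'secret');
    CREATE SCHEMA IF NOT EXISTS remote;
    IMPORT FOREIGN SCHEMA public LIMIT TO (orders)
        FROM SERVER primary_db INTO remote;

    -- Materialize locally so reads never touch the primary; refresh on
    -- whatever staleness budget you can afford.
    CREATE MATERIALIZED VIEW orders_local AS SELECT * FROM remote.orders;
    CREATE UNIQUE INDEX ON orders_local (id);  -- required for CONCURRENTLY
    REFRESH MATERIALIZED VIEW CONCURRENTLY orders_local;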
The uptime, ease of upgrading, and geo-distribution are handy, not going to lie. But it's software that really only pays off for companies with a special demand or at very high scale, and even then. I remember that reddit ran (still runs?) on barely a few DB servers.
What is even worse is that, as you stated, you need to design your database so heavily around YB/CRDB that you might as well use the above-mentioned pg solutions and get far more gains.