Hacker News | nly's comments

SQLCipher + a hard-coded or generated key in your app.
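A minimal sketch of the "generated key" half of that, in Python: on first run, create a random key, persist it in the app's data directory, and build the SQLCipher `PRAGMA key` string from it. The path and port of this sketch are illustrative, and actually opening the database needs an SQLCipher-enabled build (e.g. a binding like pysqlcipher3), which this does not assume.

```python
import os
import secrets
from pathlib import Path

def get_or_create_key(app_dir: Path) -> str:
    """Return a hex key, generating and persisting one on first run."""
    key_file = app_dir / "db.key"
    if key_file.exists():
        return key_file.read_text().strip()
    key = secrets.token_hex(32)             # 256-bit random key
    app_dir.mkdir(parents=True, exist_ok=True)
    key_file.write_text(key)
    os.chmod(key_file, 0o600)               # readable by the app user only
    return key

key = get_or_create_key(Path("/tmp/demo-app"))   # illustrative location
# With an SQLCipher-enabled connection you would then issue (illustrative):
pragma = f"PRAGMA key = \"x'{key}'\";"
print(len(key))   # 64 hex chars = 32 bytes
```

The raw-hex `x'...'` form skips SQLCipher's key derivation, which is the usual choice when the key is already random rather than a passphrase.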

Is that a typo? TCP was 2.5x faster?

I presume this is down to much larger buffers in the TCP stack.


Not a typo & yes quite likely. I haven't tuned nbd/vsock at all.

Edit: I patched both ends to change SO_SNDBUF and SO_RCVBUF from the default (both 212992) to 4194304, and that made no difference.
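For what it's worth, on Linux a plain setsockopt(SO_SNDBUF) is silently clamped to net.core.wmem_max (often 212992 by default), so patching the endpoints alone may not actually grow the buffers unless the sysctls were raised too. A quick way to check what the kernel actually granted, sketched in Python:

```python
import socket

REQUEST = 4 * 1024 * 1024   # 4194304, the value mentioned above

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, REQUEST)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, REQUEST)

# The kernel doubles the requested value to cover its own bookkeeping, but
# also clamps it to net.core.wmem_max / net.core.rmem_max; with those sysctls
# at their defaults, the effective buffer stays near 212992 regardless.
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
s.close()
```

If the printed value is nowhere near 8388608 (2x the request), the setsockopt was capped, which would explain seeing no difference.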


Given how slow protobufs and gRPC are, I wonder whether the socket transport would ever be the throughput bottleneck here.

Changing transports means that if you want to move your gRPC server process to a different box, you now have new runtime configuration to implement and support, and new performance characteristics to test.

I can see some of the security benefits if you are running on one host, but I also don't buy the advantages highlighted at the end of the article about using many different OSes and language environments on a single host. It seems like enabling and micro-optimising chaos instead of trying to tame it.

Particularly in the ops demo: statically linking a C++ gRPC binary and standardising on a host OS and gcc-toolset doesn't seem that hard. On the other hand, if you're using e.g. a Python RPC server, are you even going to be able to feel the impact of switching to vsock?


> Given how slow protobufs and gRPC are, I wonder whether the socket transport would ever be the throughput bottleneck here.

I think this is supposed to be an option for when you want to pass stuff to the host quickly, without writing another device driver or using some other interface, rather than a replacement for all RPC between VMs. "Being fast" is just a bonus.

For example, at our job we use a serial port for communication with the VM agent (it just passes some host info about where the VM is running, so our automation system can pick it up); this would be an ideal replacement for that.

And as it is "just a socket", stuff like this is pretty easy to set up: https://libvirt.org/ssh-proxy.html
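To illustrate how small the guest side of such an agent channel can be: in Python, AF_VSOCK is just another socket address family (Linux only, Python 3.7+). The port number here is made up, and the connect call is shown commented out since it only works inside a guest with a host-side listener:

```python
import socket

VMADDR_CID_HOST = 2   # well-known CID that addresses the host from a guest
AGENT_PORT = 5000     # hypothetical port the host-side agent listens on

if hasattr(socket, "AF_VSOCK"):            # Linux-only address family
    s = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
    # Inside a guest, with an agent listening on the host, you would do:
    #   s.connect((VMADDR_CID_HOST, AGENT_PORT))
    #   s.sendall(b"where-am-i?")
    s.close()
print("vsock supported:", hasattr(socket, "AF_VSOCK"))
```

Compared to a serial port there is no tty discipline or framing to worry about; it is ordinary stream-socket code.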


Works with implementations and not APIs though.

A bad API can constrain your implementation and often can't be changed once it's in use by loads of users. APIs should be right from day one if possible.


I would add the nuance that the possibility of controlled migration from one versioned API to another should be right from day one, not necessarily the first API version.

The reason you wouldn't do that is that IP doesn't give you a mechanism to share an IP address among multiple processes on a host; it just gets your packets to a particular host.

As soon as you start thinking about having multiple services on a host, you end up with the idea of a service ID, or "port".

UDP or UDP-Lite gives you exactly that at the cost of 8 bytes, so there's no real value in not just putting everything on top of UDP.
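Those 8 bytes are just four 16-bit fields: source port, destination port, length, and checksum. A quick sketch of packing one (port numbers and payload are made up):

```python
import struct

payload = b"hello"
src_port, dst_port = 5000, 6000

# UDP header: source port, destination port, length (header + payload),
# checksum. A checksum of 0 means "not computed", which UDP over IPv4 allows.
header = struct.pack("!HHHH", src_port, dst_port, 8 + len(payload), 0)

print(len(header))   # 8 -- the entire cost of getting ports
datagram = header + payload
```

Everything beyond the two port fields is length and integrity bookkeeping; the demultiplexing itself costs 4 bytes.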


Objective-C++ was great.

Rename a file to .mm and start using Objective-C APIs. Very good interop.


It's hard to see how they become profitable enough to justify current valuations.

The numbers are just mind boggling even in the optimistic scenario.


If they are able to actually successfully pivot into ads as a business model, it's very easy to justify the valuation: Just look at Google/Meta.

I'm personally skeptical that they are able to pull off ads, at least on the short timescale they likely need to. They more or less have to nurture/disrupt a whole SEO industry, and the way big corps allocate advertising budgets is very slow to shift (many are still struggling to find their footing in social media advertising).


> If they are able to actually successfully pivot into ads as a business model, it's very easy to justify the valuation: Just look at Google/Meta.

Ads are not just a switch one can turn on. Firstly, you need to build a decent ad serving/targeting/pacing engine. Secondly (and more importantly) you need to hire a shed ton of sales people (in many, many geos) and then ramp them all up (difficult if you're building the product at the same time).

And then you need to keep at it for 3-5 years minimum before you'll finally get the bigger/more conservative brands/agencies/etc to buy in properly.

At that point, you'll make decent money, after accounting for all of the costs. I'm not sure that you'll make enough money, but it would definitely stanch their bleeding a little.

tl;dr if they haven't already built this ad product, it's unlikely to make a material difference before 2030.


Why should they give a fuck about the SEO industry? You think Google "nurtures" the SEO industry? They're selling ads to businesses and displaying them to consumers. SEO folks are just parasites in the system.


Yes of course Google nurtures the SEO industry. The first hit I get when I google SEO is a best practice site from Google themselves.

The person/department/agency that is responsible for doing SEO is the same one that is responsible for spending ad money on Google. It's in their best interest to internally sell "search" (no matter if organic/inorganic) as a good advertising channel and that creates great stickiness for Google's ad business.


I can see a world where, if everything they do pans out, on average 1B "entities" end up paying $20/mo to OpenAI via the myriad integrations they end up supporting, where an entity can be a user, and one human can have many entities (i.e. a work account the company pays for, a personal account, etc.). That'd be a quarter of a trillion dollars of revenue per year. Dunno; while the number is so large it's hard to compute, it somewhat seems plausible.


And then they'll ruin it all by injecting adverts to get an extra $20B a year in revenue.


Their goal is obviously to take a fraction of the world's economy, proportional to the productivity lift they can attribute to themselves. That's in the trillions.


How does one take a fraction of the world economy?

Microsoft sells services pretty much everywhere on earth and has a near-monopoly on PC OSes, enterprise email, productivity suites plus a giant cloud computing service....and their annual revenue last year was just $245 billion.

OAI can't get anywhere close to that, not least because they have multiple competitors and several large corporations (MSFT, Google, Meta) would rather dogfood their own solutions than use a rival's.


Fortunately, many apps seem to be moving to native webviews now instead of shipping Electron.


It's absolutely not best effort that counts.

I've heard many stories of people overstaying their visa in the US by, e.g., one day through a mishap or honest mistake, and subsequently being denied visas or turned away at border control. The effects of this can go on for years and years... it's basically zero tolerance.


Overstaying in the US

Anywhere else is less strict. It can still be problematic, yes, and of course it depends on the circumstances.


It's true for Schengen visas too. Overstaying a day because of a cancelled flight is enough to deny future visas; they are very strict. It depends on the country you are applying to, and from. There are also exit requirements, like having to leave Schengen from the same country you arrived in.


Hate to break it to you, but this isn't accurate.


I overstayed my visa by a week in Thailand a couple of decades ago because the task I was sent out to perform took longer than expected. I just had to pay a reasonable fine on exit, which I then claimed back on my expenses. There was certainly not even a suggestion that I would be unwelcome in Thailand in the future.

Why is the US so awkward?


x86 has a Total Store Order (TSO) memory model, which effectively means (in a mental model where only one shared-memory operation happens at a time and completes before the next) that stores are queued in the store buffer while loads can execute immediately, even ahead of queued stores.

On a single core a load can be served from the store buffer (queue), but other cores can't see those stores yet, which is where all the inconsistencies come from.
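That mental model can be simulated directly. The sketch below is my own toy model, not real hardware behavior: stores go into a per-core buffer, loads forward from the local buffer before falling back to shared memory, and the classic store-buffer litmus test then produces the outcome that sequential consistency forbids, with both cores reading a stale 0:

```python
from collections import deque

class Core:
    """Toy TSO core: stores queue in a private store buffer; loads check
    the core's own buffer first (store-to-load forwarding), then memory."""
    def __init__(self, mem):
        self.mem = mem
        self.buf = deque()                  # pending stores: (addr, value)

    def store(self, addr, value):
        self.buf.append((addr, value))      # buffered, invisible to other cores

    def load(self, addr):
        for a, v in reversed(self.buf):     # newest matching store wins
            if a == addr:
                return v
        return self.mem.get(addr, 0)

    def drain(self):
        while self.buf:                     # stores drain to memory in order
            a, v = self.buf.popleft()
            self.mem[a] = v

mem = {}
c0, c1 = Core(mem), Core(mem)
c0.store("x", 1)        # both cores store their flag...
c1.store("y", 1)
r0 = c0.load("y")       # ...then load the other core's flag
r1 = c1.load("x")
print(r0, r1)           # 0 0: each store is still stuck in its own buffer
```

Draining both buffers afterwards makes the stores globally visible, which is why a fence (which forces the drain before the load) restores the sequentially consistent outcome on real x86.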

