My advice is a little different: make your boss's life insanely easy. Similar in nature to the post, but with a slightly different optimization function. Don't over-communicate; communicate just the right amount. Anticipate questions. Don't create any friction for them, and be really helpful. Some of my people will anticipate things and be proactive. I love that, and I constantly push to get them promoted.
I've adopted this mindset recently and it really does work. That being said, I feel it turns me into a bit of a "yes man". I wish there were more room for my authentic personality.
When I read the publication (the ACM magazine), I swear sometimes the content feels LLM-generated. Does anyone else get that impression? In general, I'm not very impressed with the content (I'm used to WIRED, btw).
The way I think of it (I might be wrong): basically, a model that has similar sensors to humans (eyes, ears) and has action-oriented outputs with some objective function (a goal to optimize against). I think autopilot is the closest thing to a world model, in that it has eyes, it has the ability to interact with the world (go different directions), and it sees the response.
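To make that loop concrete, here's a minimal sense-act-optimize sketch using the gymnasium API as a stand-in (the environment and random policy are placeholders, not a real world model):

    import gymnasium as gym

    # Stand-in "world": observations play the role of sensors, actions are
    # the action-oriented outputs, and reward is the objective function
    # being optimized against.
    env = gym.make("CartPole-v1")
    obs, info = env.reset()
    total_reward = 0.0
    for _ in range(200):
        action = env.action_space.sample()  # a real agent would choose from obs
        obs, reward, terminated, truncated, info = env.step(action)
        total_reward += reward
        if terminated or truncated:
            obs, info = env.reset()
    env.close()

Autopilot is the same loop with cameras as the observations and steering as the action.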
This is a very smart idea. I couldn't turn my Ring Alarm off even though I was on the same WiFi connection as the system. In retrospect, it would be quite smart to switch over to a local network setup.
Exactly. This was my point. Televisions can upconvert from 720p to 4K. In the same sense, the machine learning model would fill in the waveform and mimic a high-powered mic. It could do this at the connection point (iPhone / computer).
Televisions have considerably more temporal data to work with than an audio stream does. It's very easy to hack together interpolated images, not so easy to predict/denoise/upres time-series audio information.
Past a certain point it's probably easier/more efficient to use the AirPods as a speech-to-text mic and then infer a "high quality" text-to-speech version on your connected device.
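Roughly, that pipeline could look like this (a sketch only, with openai-whisper and pyttsx3 as stand-ins, not what Apple actually ships):

    import whisper   # pip install openai-whisper
    import pyttsx3   # pip install pyttsx3

    # Treat the earbud as a low-bandwidth speech sensor and recover the text...
    stt = whisper.load_model("base")
    text = stt.transcribe("airpods_capture.wav")["text"]

    # ...then re-synthesize it locally at whatever quality the device can manage.
    tts = pyttsx3.init()
    tts.save_to_file(text, "resynthesized.wav")
    tts.runAndWait()

The obvious cost is that everything non-lexical (timbre, prosody, background audio) gets thrown away at the text bottleneck.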
Isn't there a whole bunch of dependency here on prompting and methodology that would significantly impact overall performance? My gut instinct is that there are many, many ways to architect this around the LLMs, and each might yield a different level of accuracy. What do others think?
Edit: On reading more, I guess this is meant to be a dumb benchmark to monitor over time. Maybe that's the aim here, rather than viability as an auto-close tool.
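Even so, the prompting dependency is easy to measure. A toy sketch of what I mean (ask_llm and the dataset are hypothetical placeholders):

    # Same model, same task, different prompt scaffolds -> different scores.
    PROMPTS = {
        "bare":     "Issue: {issue}\nIs it resolved? Answer yes or no.",
        "stepwise": "Issue: {issue}\nThink step by step, then answer yes or no.",
    }

    def accuracy(template, dataset, ask_llm):
        # dataset: iterable of (issue_text, bool_label); ask_llm: any LLM client
        hits = 0
        for issue, label in dataset:
            answer = ask_llm(template.format(issue=issue))
            hits += ("yes" in answer.lower()) == label
        return hits / len(dataset)

Comparing accuracy(PROMPTS["bare"], ...) against accuracy(PROMPTS["stepwise"], ...) puts a number on how much the scaffolding, rather than the model, drives the result.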
Agree. Also, with respect to training, what is the goal we are maximizing? LLMs are easy: predict the next word, and we have lots of training data. But what are we training for in the real world? Modeling the next spatial photograph to predict what will happen next? It's not intuitive to me what that objective function would be for spatial intelligence.
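For contrast, the LLM objective is concrete: next-token cross-entropy, roughly (my notation):

    \mathcal{L}(\theta) = -\sum_t \log p_\theta(x_t \mid x_{<t})

It's the spatial analogue of x_t, and of that loss, that isn't obvious to me.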
100% agree. There is no reason for employees to be loyal to a company. LLM building is not some religious work; it's machine learning on big data. Always do what is best for you, because companies don't act like loyal humans. They act like large organizations that aren't always fair or rational or logical in their decisions.
To a lot of tech leadership, it is. The belief in AGI as a savior figure is a driving motivator. Just listen to how Altman, Thiel or Musk talk about it.
That's how they talk about it publicly. I can attest that, for two of the three you list, the companies are not like that internally at all. It's all marketing, outwardly focused.
"Tech founders" for whom the "technology" part is the thing always getting in the way of the "just the money and buzzwords" part.
Now they think they can automate it away.
25+ years in this industry and I still find it striking how different the perspective between the "money" side and the "engineering" side is... on the same products/companies/ideas.
> Just listen to how Altman, Thiel or Musk talk about it.
It's surprising how little they seem to have thought it through. AGI is unlikely to appear in the next 25 years, but even if, as a mental exercise, you accept it might happen, it reveals a paradox: if AGI is possible, it destroys its own value as a defensible business asset.
Like electricity, nuclear weapons, or space travel, once the blueprint exists, others will follow. And once multiple AGIs exist, each will be capable of rediscovering and accelerating every scientific and technological advancement.
The prevailing idea seems to be that the first company to achieve superintelligence will be able to leverage it into a permanent advantage via exponential self-improvement, etc.
> able to leverage it into a permanent advantage via exponential self-improvement
Their fantasies of dominating others through some modern-day Elysium reveal far more about their substance intake than about their rational grasp of where they actually stand... :-)
Tech leadership always treats new ventures or fields that way, because being seen to treat it that way, and selling the idea of treating it that way, is how you attract people (employees, and if you are very lucky, investors too) who are willing to sacrifice their own rational material interests to advance what they see as the shared religious goal (which is, in fact, the tech leader's actual material interest).
I mean, even on HN, which is clearly a startup-friendly forum, that tendency among startup leaders has been noted and mocked repeatedly.