Sure, I don’t think the meaning of “artificial” is in question. Presumably the reason thomastjeffery said that there is no such thing as artificial intelligence was not that he thought the threshold for being artificial had not been reached.
Indeed, that(*) is what I was implying. Therefore, my initial question was, "What is it that you mean by 'intelligence' when you say that there is no such thing as 'artificial intelligence'?"
I had thought it would have been clear from my initial reply that this is what I meant. I didn’t expect this conversation to go in the direction of clarifying that.
The technologies we have are models, not actors. They are each the static result of a process that determines boundaries and relationships between pieces of data. These models do not, however, organize or label the boundaries or relationships themselves.
For example, you can model a dataset of human-written text into an LLM. Using that LLM, you can transform a human-written prompt into a "continuation" that incorporates the modeled dataset. The resulting continuation will contain a new arrangement of text from your prompt and/or text from the dataset: nothing more.
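As a concrete sketch of that prompt-to-continuation step (a minimal illustration only, assuming the Hugging Face `transformers` library and the publicly available `gpt2` checkpoint; the prompt text and generation parameters here are made up for the example):

```python
# Minimal sketch: a pretrained LLM turning a prompt into a "continuation".
# Assumes the Hugging Face `transformers` library and the public `gpt2` model.
from transformers import pipeline

# Load a model that was fit to a large dataset of human-written text.
generator = pipeline("text-generation", model="gpt2")

prompt = "The difference between a model and an actor is"

# The output is a statistically likely rearrangement of patterns learned
# from the dataset, conditioned on the prompt: nothing more.
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```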
The boundaries and relationships modeled by an LLM are not categorized. The model does not contain any objective observation about its data. It only provides a structure that is intended to "align to" (simulate) the already-present semantics of natural language. That alignment was never the model's intention: it comes from the authors of the model, and from the presence of patterns (which we recognize as language semantics) in the dataset itself. Without those natural-language patterns, there would be no boundary to align to in the first place.
By contrast, a human can read text into ideas, think objectively about those ideas, produce new ideas, and finally express those ideas as text by structuring them into language semantics. Nothing like that happens in any software I am aware of.
While I'm looking forward to this experiment, it feels a lot like everything that (many) adults forbid children, thinking the children are better off with their innocence preserved until they turn 18. Suddenly they have access to cars, alcohol, sex and more, and when those are mixed together, especially the alcohol, after 18 years of obedience, it sometimes doesn't end well. On the other hand, some parents raise their children drinking small amounts of wine and/or beer - is that worse?
Thing is, in my experience I have been able to learn to tolerate someone after spending enough time with them.
It's possible I might learn to tolerate this potential cofounder as well, if I spend more time with him.
It's certainly not my first choice, but he and I have nearly identical interests in terms of the startup we want to build, as well as perfectly complementary skill sets.
You make a good point. But that's something I'd iron out before co-founding together. Co-founding is like getting married. Essentially you're saying "yeah, she looks like an alien dissected a train wreck and put it back together inside out, but she's really smart."
Some people are okay with that. Some aren't. Are you?
And what good are experiences and interests if your visions aren't in line with each other, or you end up just never liking the guy? I mean, if you can find things to like and get along, that's great! But the fact that you're asking this question to begin with is troubling in itself. You shouldn't have to ask, IMHO. You should just know. And I suspect you already do.
I agree that if I end up not liking him (as a person) at all, then it won't work out.
I was planning on setting up something like a 3-month part-time working relationship where we can work together on a side project to get to know each other better.
A 3-month work commitment is not much, plus if I go into it and a month later decide things don't work out, I can just leave.
But I'm wondering if it's worth investing time in this 3-month work arrangement at all.
Honestly, from what you're saying, I'd pass. Time is too precious. I mean, if you think you might have a Jamie-and-Adam type of relationship (MythBusters), where you don't always see eye to eye but you always do amazing work together, that's great. But if you don't see that now, and don't already have that professional respect, then I dunno.
Have you discussed this with him? And if not, why not? You'll be tackling much bigger issues together in the future, so tackle this one. What does he think? It might be as simple as saying "You know, we have a lot in common and I think we could work together, but our personalities just don't seem to click. I'm concerned about working together long term. What do you think?"
I appreciate the feedback. You're probably right, but it gives us something to dream about. But assuming we do have this cap table, how would investors go about purchasing preferred shares in the company?
I notice this trend on Hacker News where there are so many pessimistic comments. It's like an ocean of pessimistic people.
x is dying, y is dying.
FB is not dying. It is literally a money printing machine and will continue to be for many years.
Instagram, despite being over a decade old (very old in the internet world), continues to grow [quarter over quarter](https://i.gyazo.com/aee60a22bcc2757f446340c51a17b3bc.png). Where in that chart do you see grounds for the conclusion that "Instagram is slowly dying"?
The metaverse has not been rejected. If you're like me and follow its progress, you can see it making significant improvements year over year, but it is still in the early-adopter phase. Like it or not, humans want more visceral experiences, and putting yourself in a 3D virtual environment is a lot more visceral than staring at a 2D screen. Only a tech boomer would say the metaverse has been rejected when it's still in its infancy.
IBM is a "tech" company with 282,000 employees, and when was the last time they invented something? I don't remember the last time I heard about IBM in the news for something they made.
The bigger the company, the less innovation and the more administration & bureaucracy you tend to find.
The reason startups can survive is their small size, which makes them flexible and adaptable to chaos and change; that gives them the edge over bigger companies.
the power law applies to any big organization. 20% of the people do 80% of the work, whilst 80% of the people are just there for "support".
whatsapp was run by a team of around 50 people when they got acquired for around $20 billion. for a simple software product, you don't really need that many people. in fact, more people often means worse software. you just need a small group of very talented engineers to run the product and add new features when necessary.
big (and especially public) companies oftentimes need to hire a lot, just to look like a real company.
now that twitter is private, elon has no responsibility to public investors and can focus less on looking like a real company and more on doing what needs to be done to cut bloat/costs and improve product
Artificial intelligence is intelligence created by humans. Artificial means man-made.