Different accounts did the submitting; it could be the same person, but might not be. Also, HN guidelines don't really discourage duplicate submissions, and in some cases HN moderators are known to encourage re-submission.
Why are you upset that the author resubmitted? The last posts got no traction at all.
A post getting submitted twice does not make this a repost. Practically nobody saw the previous submissions.
It's permissible for folks to submit posts a few times until they actually get noticed or dang steps in to tell them to stop. The author is doing things exactly as the rules permit.
If you've learned from your wife's social media career, you know that you often have to keep trying in order to pick up traction. The algorithm requires signal.
"Arc Max uses OpenAI’s API Platform and Anthropic’s Claude for commercial applications. These features are off by default in the desktop app and not available in the mobile app at this time."
which in some cases includes the whole content of the website, the URL, etc.
I kicked off Firefox's local translation project last year with my engineering partner Andre (who led Mozilla's efforts with Bergamot and the folks in Europe). Those models are about 30MB per language pair, and English is a pivot language: to get from, say, French to Spanish, you need two models, FR/EN and EN/ES. That works out OK for folks on good connections who rarely encounter a new language and can benefit from already-downloaded models.
I'm not so sure how well it'd work for users if the models were 10X or 100X that size. What do you all think?
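The pivot scheme described above can be sketched roughly like this (the `translate` stub and model registry are hypothetical stand-ins, not Bergamot's actual API):

```python
# Sketch of pivot translation: English acts as the hub, so any
# non-English pair is served by chaining two English-based models.

# Each entry stands in for one downloaded ~30MB model.
MODELS = {("fr", "en"), ("en", "fr"), ("es", "en"), ("en", "es")}

def translate(text: str, src: str, dst: str) -> str:
    # Stand-in for invoking the real src->dst model; here it just
    # tags the text so the chaining is visible.
    assert (src, dst) in MODELS, f"no model downloaded for {src}->{dst}"
    return f"[{src}->{dst}]{text}"

def pivot_translate(text: str, src: str, dst: str) -> str:
    """Translate src->dst, routing through English when needed."""
    if src == dst:
        return text
    if (src, dst) in MODELS:
        # One side of the pair is English: a single model suffices.
        return translate(text, src, dst)
    # Neither side is English: chain src->en, then en->dst,
    # which is why two model downloads are needed.
    return translate(translate(text, src, "en"), "en", dst)

print(pivot_translate("bonjour", "fr", "es"))
```

The upside of the pivot design is that N languages need roughly 2N models instead of N² direct-pair models, at the cost of a second inference pass (and some quality loss) for non-English pairs.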
Yikes, I immediately thought of exactly the same thing. Sending all that data to third parties? No thank you. My browser should be my safe space, not the thing leaking data itself.
Wow, I would've thought these were run on a local model. This sounds like it could get expensive for them quickly, and if I understand correctly these features are available for free? I guess it's only a matter of time until they start charging for them.
EDIT: Per a comment in another thread, these features are only free for 90 days it seems.
Awww... It would be great if you could share some metadata on the use case. I wonder whether it's a narrow task (one that could be handled by a cron job), or something more general that can get creative and occasionally out of control.
What kinds of tools does the agent use? (Web search / generating content / ...)
Did you make any modifications to the code, or use it as-is? If you modified it, how much time did you spend, and in what areas?
Feel free to answer only the subset of questions you're comfortable with.
Or just ask questions about three or four wildly different fields of science, sports, and culture that you happen to have more than a layman's understanding of. If the answers are all somewhat plausible, it's probably a model. Or your life partner.
In this thought-provoking post, I delve into the intriguing connection between concentrated AI power and the 51% attack. With the emergence of regulatory discussions surrounding AI and the potential risks it poses, it becomes crucial to explore the parallels between these concerns and the concept of the 51% attack in cryptocurrency. By examining the concentration of power in AI models and its implications, we gain valuable insights into the need for thoughtful regulation and its impact on innovation.
I think we should call it GEO - GPT Engine Optimization. :)
Furthermore, the article lacks an in-depth analysis of how to tackle those shifts.
For example: how can one get their info presented by GPT or GPT plugins?
Are there any optimizations for running LLMs on RTX cards (30XX, 40XX)?
I found that llama.cpp is nice, but I want to take advantage of my graphics card as well, and I didn't find any documentation...
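For what it's worth, llama.cpp does have CUDA support for NVIDIA cards. A rough sketch of the build-and-run steps (flag and binary names vary between versions, so check the README in your checkout):

```shell
# Build llama.cpp with CUDA support. The cmake flag is GGML_CUDA in
# recent versions (older releases used LLAMA_CUBLAS instead).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release

# Offload model layers to the GPU with --n-gpu-layers (-ngl);
# a large value like 99 pushes everything that fits onto the card.
# "model.gguf" is a placeholder for whatever GGUF model you downloaded.
./build/bin/llama-cli -m model.gguf -ngl 99 -p "Hello"
```

The key knob is `-ngl`: with 0 everything runs on the CPU, and each additional layer offloaded trades VRAM for speed, so on a 30XX/40XX card you'd raise it until you run out of VRAM.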
No it abuses security vulnerabilities in 3rd party businesses who are using OpenAI. It doesn't get you access to OpenAI's api at OpenAI's expense. It gets you access at [vulnerable 3rd party]'s expense. Bankrupting someone using OpenAI doesn't seem to achieve much in the way of democratization of AI tools, sorry.
It's not bankrupting them, as the author is highly ethical (using only big companies' open APIs, and removing the small ones and any that ask to be removed).
But as for your comment, I see it rather as an opportunity to make this opt-in for the companies themselves.
That way it would actually be a win-win for them, as marketing and ads (at a lower price).
Only stealing from people who haven't asked you nicely to stop doesn't scream "highly ethical" to me
Security researchers put a lot of emphasis on responsibly disclosing vulnerabilities. The maintainers of this project could easily have done the same, but they didn't.
Of course! If this were opt-in, then the only question would be between OpenAI and the service providers: whether that's an allowable use of OpenAI's APIs based on the terms of service and whatnot.