Hacker News | TheObviousOne's comments

Besides the spam issue... is the repo legit, though?




Different accounts did the submitting - could be the same person, but might not be, either. Also, HN guidelines don't really discourage duplicate submissions, and in some cases HN moderators are known to encourage re-submission.


Why are you upset that the author resubmitted? The last posts got no traction at all.

Getting submitted twice does not make this a re-post. Nobody on earth saw the previous submissions.

It's permissible for folks to submit posts a few times until they actually get noticed or dang steps in to tell them to stop. The author is doing things exactly as the rules permit.

If you've learned from your wife's social media career, you know that you often have to keep trying in order to pick up traction. The algorithm requires signal.


Just read their privacy policy...

"Arc Max uses OpenAI’s API Platform and Anthropic’s Claude for commercial applications. These features are off by default in the desktop app and not available in the mobile app at this time."

which in some cases includes the whole content of the website, the URL, etc...

https://arc.net/privacy

NO THANKS!

CALLING OUT TO INTEGRATE THOSE FEATURES WITH LLAMA 2 LOCALLY WITHIN FOSS CHROMIUM / FIREFOX


I kicked off Firefox's local translation project last year with my engineering partner Andre (who led Mozilla's efforts with Bergamot and the folks in Europe). Those models are about 30MB per language pair (English is a pivot language, so to get from, say, French to Spanish, you need two models: FR/EN and EN/ES), and that works out OK for folks on good connections who rarely encounter a new language and can benefit from the already-downloaded models.

I'm not so sure how well it'd work for users if the model was 10X or 100X that size. What do you all think?
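The pivot scheme described above can be sketched in a few lines. This is purely illustrative: the lookup tables below are stand-ins for the real Bergamot translation models, just to show the two-pass chaining through English.

```python
# Illustrative sketch of pivot translation: FR -> EN -> ES.
# The tables are toy stand-ins for the ~30MB per-pair models;
# a real model translates whole sentences, not word by word.

FR_EN = {"bonjour": "hello", "monde": "world"}
EN_ES = {"hello": "hola", "world": "mundo"}

def translate(text, table):
    # Word-by-word lookup, leaving unknown words untouched.
    return " ".join(table.get(word, word) for word in text.split())

def fr_to_es(text):
    # English is the pivot: two model passes, FR/EN then EN/ES.
    english = translate(text, FR_EN)
    return translate(english, EN_ES)

print(fr_to_es("bonjour monde"))  # -> hola mundo
```

The upside of pivoting is that N languages need only N model pairs (each to/from English) instead of N² direct pairs; the cost is two downloads and two inference passes for any non-English pair.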


Yikes, I immediately thought of exactly the same thing. Sending all that data to third parties? No thank you. My browser should be my safe space, not something that leaks data itself.


Wow, I would've thought these were run on a local model. This sounds like it could get expensive for them quickly, and if I understand correctly these features are available for free? I guess it's only a matter of time until they start charging for them.

EDIT: Per a comment in another thread, these features are only free for 90 days it seems.
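A back-of-envelope calculation shows why this could get expensive. Every number below is an assumption for illustration, not Arc's actual pricing or usage: say a blended API price of $0.01 per 1K tokens, and each page summary consuming ~4K tokens of context plus ~0.5K of output.

```python
# Rough monthly API cost for an LLM-backed browser feature.
# All constants are assumptions, not real Arc/OpenAI figures.

PRICE_PER_1K_TOKENS = 0.01       # assumed blended price (USD)
TOKENS_PER_SUMMARY = 4_000 + 500 # assumed context + output tokens

def monthly_cost(users, summaries_per_user_per_day, days=30):
    tokens = users * summaries_per_user_per_day * days * TOKENS_PER_SUMMARY
    return tokens / 1_000 * PRICE_PER_1K_TOKENS

# 100K users doing 5 summaries a day:
print(f"${monthly_cost(100_000, 5):,.0f}/month")  # -> $675,000/month
```

Even if the real numbers are off by an order of magnitude in either direction, it is easy to see why a free-for-90-days model would not last.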


The enshittification cycle is moving fast these days.


Link: https://twitter.com/MetaAsAService/status/170679883460343414...

(Just a demonstration to prove that it was trained on human data and understands location - later prompt engineering could reveal more.)


can you elaborate?


Probably not without violating an NDA. What would you like to know?


AWWWW... It would be great if you could share some "metadata" on the use case. I wonder whether it's a narrow task (one that could be done via a cron job), or something more general that can be creative and sometimes out of your control. What kind of tools does the agent use? (Web search / generating content / ...) Did you make any modifications to the code, or use it "as is"? If so, how much time did you spend on the modifications, and in what areas?

Feel free to answer only the subset of questions you're comfortable with.


An interesting tactic might be going in the other direction:

asking it to generate something that requires superhuman capabilities...

"Write a 65-page poem about X..."


Or just ask questions about three or four wildly different fields of science, sports, and culture that you happen to have more than a layman's understanding of. If the answers are all somewhat plausible, it's probably a model. Or your life partner.


Or ask two or three historical questions that very few humans would know anything about ... but a well-trained model would.


In this thought-provoking post, I delve into the intriguing connection between AI-power control and the 51% attack. With the emergence of regulatory discussions surrounding AI and the potential risks it poses, it becomes crucial to explore the parallels between these concerns and the concept of the 51% attack in cryptocurrency. By examining the concentration of power in AI models and its implications, we gain valuable insights into the need for thoughtful regulation and its impact on innovation.


I think we should call it GEO - GPT Engine Optimization. :)

Furthermore, the article lacks in-depth analysis of how to tackle those shifts. For example: how can one get their info presented by GPT / GPT plugins?


I prefer LLMAO: Large Language Model Appearance Optimisation


Is there a way to make it give longer answers?


Is there any integration with LangChain?

+

Is there any optimization for LLMs to run on RTX cards (40XX, 30XX)? I found that llama.cpp is nice, but I want to take advantage of my graphics card too, and didn't find any documentation...


rllama has an OpenCL version, though I wasn't able to test it.


Let's arrange a donation for the creator of this Repo.

This is gold and crucial for democratization of AI tools.


No it abuses security vulnerabilities in 3rd party businesses who are using OpenAI. It doesn't get you access to OpenAI's api at OpenAI's expense. It gets you access at [vulnerable 3rd party]'s expense. Bankrupting someone using OpenAI doesn't seem to achieve much in the way of democratization of AI tools, sorry.


It's not bankrupting them, as the author is highly ethical (using only "big companies'" open APIs, and removing the small ones as well as any that ask to be removed).

But as for your comment, I see it rather as an opportunity to make this opt-in for the companies themselves. That way it could actually become a win-win for them, as marketing and ads (at a lower price).


Only stealing from people who haven't asked you nicely to stop doesn't scream "highly ethical" to me.

Security researchers put a lot of emphasis on responsibly disclosing vulnerabilities. The maintainers of this project could easily have done the same, but they didn't.


Of course! If this were opt-in, then the only remaining problem would be between OpenAI and the service providers: deciding whether that's an allowable use of OpenAI's APIs based on the terms of service and whatnot.


There is no commonly accepted definition of ethics under which that behavior would be considered ethical.

