BUFU's comments

Thanks for the clarification. I think if they disallow first parties from getting medical and legal advice, it will do more harm than good.

Make card payments available to local AI agents. How would that sound?


The open source models are no longer catching up. They are leading now.


It has been like that for a while now, at least since DeepSeek R1.


This is the best title I've seen in a while.


Would it be possible that other people posted content from the Harry Potter books online and the model developer scraped that content? Would the model developer be at fault in this scenario?


I think this is a good question, at least for LLMs in general. However, we know that Meta used pirated torrents.


Saw this on Reddit. Super impressive: https://www.reddit.com/r/LocalLLaMA/comments/1k4lmil/a_new_t...

Local NotebookLM Audio Overview coming soon?


Welcome!


Will llama.cpp be the go-to local inference framework for every device?


On a 2024 Mac Mini M4 Pro, Qwen2-Audio-7B-Instruct running on Transformers achieves an average decoding speed of 6.38 tokens/second, while OmniAudio-2.6B through Nexa SDK reaches 35.23 tokens/second in the FP16 GGUF version and 66 tokens/second in the Q4_K_M quantized GGUF version, delivering 5.5x to 10.3x faster performance on consumer hardware.
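
For reference, a minimal Python sketch that checks the quoted speedup ratios against the tokens/second figures above (the variable names are just illustrative, not part of the Nexa SDK or Transformers APIs):

    # Sanity-check the speedup figures cited in this comment.
    baseline_tps = 6.38   # Qwen2-Audio-7B-Instruct on Transformers, tokens/s
    fp16_tps = 35.23      # OmniAudio-2.6B via Nexa SDK, FP16 GGUF, tokens/s
    q4_tps = 66.0         # OmniAudio-2.6B via Nexa SDK, Q4_K_M GGUF, tokens/s

    print(f"FP16 speedup:   {fp16_tps / baseline_tps:.1f}x")  # ~5.5x
    print(f"Q4_K_M speedup: {q4_tps / baseline_tps:.1f}x")    # ~10.3x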

Blogs for more details: https://nexa.ai/blogs/OmniAudio-2.6B

HuggingFace Repo: https://huggingface.co/NexaAIDev/OmniAudio-2.6B

Run locally: https://huggingface.co/NexaAIDev/OmniAudio-2.6B#how-to-use-o...

Interactive Demo: https://huggingface.co/spaces/NexaAIDev/omni-audio-demo


This is a crazy thought lol

