Llama2.c L2E LLM – Multi OS Binary and Unikernel Release (github.com/trholding)
76 points by AMICABoard on Aug 26, 2023 | 29 comments



Have you ever wanted to boot and run inference on a herd of thousands of virtual baby Llama 2 models on big-ass enterprise servers? No? Well, now you can! (Almost bare metal)

Also, drop the binary-portable run.com cosmocc build on any OS and run it! Truly portable. (Soon bare metal)

Special Thanks & Credits:

llama2.c - @karpathy

cosmopolitan - @jart

unikraft - @unikraft

Would love to hear your feedback here!


Would love some examples, practical A-to-Z stuff; how does one run a couple of these on a local (yes, large) server, for instance?


It's just the beginning; there's still optimization and some figuring out to do.

This fork is based on karpathy's llama2.c, and we try to mirror its progress, adding our patches on top for performance, binary portability, and running as a unikernel. However, there is a catch: this doesn't currently infer the 7B or bigger Meta Llama 2 models at a usable speed yet. It's too slow and memory-hungry.

My plan is to get to a stage where we can actually infer larger models at a comfortable speed, like llama.cpp / ggml does, and add GPU acceleration along the way.

Also, this doesn't have a web API yet; I'll be adding that in the next update. Then it would actually make sense to deploy it on a server and test it out.

Right now you would have to manually spawn VM instances with qemu like this:

qemu-system-x86_64 -m 256m -accel kvm -kernel L2E_qemu-x86_64

or

qemu-system-x86_64 -m 256m -accel kvm -kernel L2E_qemu-x86_64 -nographic

and that's not very practical, especially as there is no web API yet. So see this more as a tech preview, a release-early, release-often thing.

As I get time, I'll be adding a build for Firecracker and also writing instructions to spawn hundreds of baby Llama 2 KVM qemu / Firecracker instances on a powerful server.
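
Until then, if someone wants to script it, a rough sketch along these lines should do (assuming the L2E_qemu-x86_64 image from the release and a KVM-capable host; instance count and memory size are just placeholders):

# spawn_llamas.py - rough sketch: boot a herd of L2E unikernel VMs with qemu/KVM.
# Assumes the L2E_qemu-x86_64 image from the release is in the current directory.
import subprocess

N_INSTANCES = 8          # placeholder; scale up to taste / available RAM
KERNEL = "L2E_qemu-x86_64"

procs = []
for i in range(N_INSTANCES):
    cmd = [
        "qemu-system-x86_64",
        "-m", "256m",        # same memory size as the example above
        "-accel", "kvm",
        "-kernel", KERNEL,
        "-nographic",        # headless; serial console goes to the per-instance log
    ]
    log = open(f"llama_{i}.log", "wb")
    procs.append(subprocess.Popen(cmd, stdout=log, stderr=subprocess.STDOUT))
    print(f"spawned instance {i} (pid {procs[-1].pid})")

# Wait for all instances to exit (Ctrl-C to stop the whole herd).
for p in procs:
    p.wait()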

Thank you for your interest. As per your suggestion, a comprehensive howto is planned. Feel free to add any issues / wants / suggestions at https://github.com/trholding/llama2.c , and I'll address them as I get time.

I'm stuck with bigger IRL projects, but if there is deep interest from the community I'll be sure to spend more time on this.


This, and also how to let them talk to each other and follow the conversation after setting up only the initial topic - that would be awesome!


If you add it as an issue, I'll address it in the future. I like the idea, but with the current toy-size small models it won't be much fun (I did a manual try). Once larger models run at a good pace, it would be absolutely cool.


Thanks, and I love the reference to the wonderful blinking Guru Meditation we used to see on our Amiga.


I grew up with the Amiga 500; it was my first love. How could I forget those times!


Is this similar to llama.cpp? I'm not very versed in this area.


This is a fork of https://github.com/karpathy/llama2.c

karpathy's llama2.c is like llama.cpp, but it is written in C and the Python training code is available in the same repo. llama2.c's goal is to be an elegant single-file C implementation of inference and an elegant Python implementation of training.

His goal is for people to understand how Llama 2 and LLMs work, so he keeps it simple and sweet. As the project progresses, features and performance improvements will be added.

Currently it can infer the baby (small) stories models trained by karpathy at a fast pace. It can also infer Meta's Llama 2 7B model, but at a very slow rate, around 1 token per second.

So currently this can be used for learning or as a tech preview.

Our friendly fork tries to make it portable, performant, and more usable (bells and whistles) over time. Since we mirror upstream closely, the inference capabilities of our fork are similar, but slightly faster if compiled with acceleration. What we try to do differently is make this bootable (not there yet) and portable. Right now you get binary portability: use the same run.com on any x86_64 machine running any OS and it will work (made possible by the cosmopolitan toolchain). The other part that works is unikernels: boot this as a unikernel in VMs (made possible by the unikraft unikernel & toolchain).
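
For example, a minimal sketch of driving the same run.com from Python on any OS (the checkpoint name is a placeholder and the exact command-line arguments depend on the llama2.c version in the release):

# run_portable.py - sketch: call the portable run.com build via subprocess.
# Assumes run.com and a small checkpoint sit next to this script; the checkpoint
# name is a placeholder and extra CLI arguments depend on the llama2.c version.
# On some Linux setups the APE binary may need binfmt registration or a one-time
# "sh ./run.com" first.
import os
import subprocess

MODEL = "stories15M.bin"   # placeholder small stories checkpoint

binary = os.path.join(".", "run.com")
result = subprocess.run([binary, MODEL], capture_output=True, text=True)
print(result.stdout)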

See our fork for now as a release-early, release-often toy tech demo. We plan to build it out into a useful product.


Inspired by, but no shared code as far as I can see on a brief scan.


It is not just inspired by; it's a friendly fork, so it's more than inspiration. The code has diverged a bit, but yes, I do try to share. karpathy's upstream llama2.c project has clearly stated goals of elegance and simplicity, so code with a lot of preprocessor directives, like ours, won't help upstream. Apart from this, we are very thankful to jart & the contributors from the cosmopolitan project and also the unikraft folks, without whom binary portability or unikernels wouldn't have been possible.

TL;DR: Simple, elegant stuff that we added will be shared upstream. Complex, non-elegant code won't be shared, as it wouldn't be accepted upstream.


That's your relationship to upstream llama2.c, which is fairly clear. politelemon was asking about the relationship to ggerganov's llama.cpp, which seems to have inspired the upstream llama2.c.


Does it offer a local API so I can embed this in my Python build?


If you'd like to add an issue, you can do so at https://github.com/trholding/llama2.c

I have planned a web API. Are you looking for a Python binding? I'm interested to know. The issues would keep me organized.


The ability to hook it into frameworks like Streamlit would be huge. Sorta like llama-cpp-python.


Agreed, and yes, it would be awesome... But currently this does not infer the large Meta 7B model efficiently: around 1 token per second or lower. The small toy stories (not so useful) models are fast, though.

Once the above-mentioned API / Python binding is ready, I'll make a Streamlit interface demo. The Streamlit demo should be simple, but I have to figure out the Python binding.
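
As a stopgap, a rough Streamlit sketch that just shells out to the run binary could look something like this (purely illustrative: the model path is a placeholder, the prompt argument depends on the llama2.c version, and a proper Python binding would replace the subprocess call):

# app.py - rough sketch: Streamlit front end shelling out to the llama2.c run binary.
# Everything here is illustrative: the checkpoint is a placeholder and the way the
# prompt is passed (-i flag vs. positional argument) depends on the llama2.c version.
import subprocess
import streamlit as st

MODEL = "stories15M.bin"   # placeholder small stories checkpoint

st.title("Baby Llama 2 demo")
prompt = st.text_input("Prompt", "Once upon a time")

if st.button("Generate"):
    out = subprocess.run(["./run", MODEL, "-i", prompt],
                         capture_output=True, text=True)
    st.text(out.stdout if out.returncode == 0 else out.stderr)

Run it with: streamlit run app.py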


not yet


"serving as an information gateway for students without constant reliance on the internet." But llms without knowledge augmentation are terrible at this! How can students know what they read is true?


What you said is true; it's something we would have to figure out along the way.

What I have in mind is:

1. Topic-specialized models which are frequently updated, maybe every month or two.

2. Fact-checking & moderation specialized models, which moderate or fact-check other models' output.

Kind of a chicken-and-egg problem. But I believe along the way we will be able to minimize the effects of hallucinations through output validation (both neural and rule-based).
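
To make point 2 concrete, here is a purely hypothetical sketch (neither checkpoint exists yet; both calls just shell out to the run binary with placeholder arguments):

# validate_sketch.py - hypothetical sketch of point 2: one model answers, a second
# "fact-checking / moderation" model judges the output. Both checkpoints, the -i
# prompt flag and the yes/no convention are placeholders; nothing like this ships yet.
import subprocess

GENERATOR_MODEL = "topic_model.bin"    # hypothetical topic-specialized checkpoint
CHECKER_MODEL = "factcheck_model.bin"  # hypothetical fact-checking checkpoint

def infer(model, prompt):
    # Shell out to the llama2.c run binary; adjust args for your build's CLI.
    out = subprocess.run(["./run", model, "-i", prompt],
                         capture_output=True, text=True)
    return out.stdout

answer = infer(GENERATOR_MODEL, "Explain photosynthesis to a 10 year old.")
verdict = infer(CHECKER_MODEL, "Is the following answer factually sound? " + answer)

# A rule-based gate (keyword filters, citation checks, etc.) could sit here too.
print(answer if "yes" in verdict.lower() else "Answer withheld: failed validation.")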


Pretty reasonable. I suggest you mention something along these lines to avoid further misconceptions.


Will be updated with the next commit.


As an aside, any idea why this comment is being downvoted? Seems like a productive exchange.


Pressing the up arrow beside the comments is an upvote, right? I hope this doesn't have some inverted logic.


No idea. I have just upvoted your comment.


I figured it wasn't you :), anyway cheers buddy!


I have updated the description and added the requested clarification.

To avoid trolls and invisible downvotes, you can always use GitHub issues. You are welcome there.

Thanks for the initial question. Ultimately it seems that there must always be a human in the loop somewhere.


How can students know what the teacher tells them is true?


How can students ever know the truth when history is written by the victors? :)

While similar, I believe revisionist history is different from LLM hallucination, though it's hard to say which is more dangerous. I guess it depends on the content. I suspect an LLM won't jail you for questioning it; it will just happily hallucinate an apology and a new answer :)


That's something we would have to figure out along the way, especially due to hallucinations. See the post under this for ideas on how I would try to solve it.



