Yep! I totally understand the concerns around not being able to share data externally - the library currently supports open source, self-hosted LLMs through Hugging Face pipelines (https://github.com/refuel-ai/autolabel/blob/main/src/autolab...), and we plan to add more support here for models like llama.cpp that can be run without many constraints on hardware.
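For anyone curious what self-hosted labeling looks like in practice, here's a minimal sketch of running a labeling prompt against a locally hosted model via a Hugging Face pipeline. The model name and prompt are just illustrative assumptions, not Autolabel's actual API:

```python
# Minimal sketch: running a labeling prompt against a self-hosted,
# open source LLM via a Hugging Face pipeline, so no data leaves
# your infrastructure. Model name and prompt are illustrative only.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="tiiuae/falcon-7b-instruct",  # any open, locally runnable LLM
    device_map="auto",                  # requires `accelerate`
)

prompt = (
    "Classify the sentiment of this review as positive or negative.\n"
    "Review: 'Great product, arrived on time.'\n"
    "Sentiment:"
)
result = generator(prompt, max_new_tokens=5, do_sample=False)
print(result[0]["generated_text"])
```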
The earlier post was a report summarizing LLM labeling benchmarking results. This post shares the open source library.
Neither is intended to be an ad. Our hope with sharing these is to demonstrate how LLMs can be used for data labeling, and to get feedback from the community.
>> don't trust that there was no funny business going on in generating the results for this blog
All the datasets and labeling configs used for these experiments are available in our GitHub repo (https://github.com/refuel-ai/autolabel), as mentioned in the report. Hope these are useful!
Good question - one follow-up question: value for whom?
If it is to train the LLM that is labeling, then I agree.
If it is to train a smaller downstream model (e.g. finetune a pretrained BERT model), then the labels are as valuable as those from any human annotator, and the value is only a function of label quality.
Why retrain that smaller model from scratch tho? Just do a little transfer learning, or get creative and see if you can prune down to a smaller model algorithmically instead of doing the whole label and train rigamarole from scratch on what is effectively regurgitation.
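For concreteness, here's a rough sketch of the algorithmic pruning idea using PyTorch's built-in pruning utilities; the architecture and sparsity level are made-up placeholders, not a recommendation:

```python
# Sketch of "prune instead of retrain" using torch.nn.utils.prune.
# The toy model and 30% sparsity are illustrative assumptions.
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Linear(768, 256),
    nn.ReLU(),
    nn.Linear(256, 2),
)

# L1-magnitude unstructured pruning: zero out the 30% of weights
# with the smallest absolute value in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # make the pruning permanent

# The pruned model typically needs a brief fine-tuning pass to recover
# accuracy, but not a full label-and-train cycle from scratch.
```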
Hmm, I'm not suggesting training a smaller model from scratch - in most cases you'd want to finetune a pretrained model (i.e., transfer learning) for your specific use case/problem domain.
The need for labeled data for any kind of training is a constant though :)
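As a concrete illustration of that finetuning step, here's a minimal sketch using Hugging Face Transformers. The texts, labels, and hyperparameters are placeholder assumptions, with the labels standing in for LLM-generated ones:

```python
# Sketch: finetuning a pretrained BERT on (LLM-generated) labels.
# Dataset contents and hyperparameters are illustrative only.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import Dataset

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

# In practice, texts and labels come from the LLM labeling step.
data = Dataset.from_dict({
    "text": ["great product", "terrible service"],
    "label": [1, 0],
}).map(lambda batch: tokenizer(batch["text"], truncation=True,
                               padding="max_length", max_length=64),
       batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=data,
)
trainer.train()
```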
Hi, one of the authors here. Good question! For this benchmarking, we evaluated performance on popular open source text datasets across a few different NLP tasks (details in the report).
For each of these datasets, we specify task guidelines/prompts for the LLM and human annotators, and compare the performance of each against ground truth labels.
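In code terms, the comparison boils down to something like the sketch below (made-up labels and standard sklearn metrics; see the report for the actual methodology):

```python
# Sketch of the evaluation: score LLM and human labels against ground
# truth on the same examples. The label arrays here are made up.
from sklearn.metrics import accuracy_score, f1_score

ground_truth = [1, 0, 1, 1, 0]
llm_labels   = [1, 0, 1, 0, 0]
human_labels = [1, 0, 0, 1, 0]

for name, preds in [("LLM", llm_labels), ("human", human_labels)]:
    print(name,
          "accuracy:", accuracy_score(ground_truth, preds),
          "f1:", f1_score(ground_truth, preds))
```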
You didn't answer the question at all, although to be fair the answer is both obvious and completely undermines your claim so I can see why you wouldn't.
A comprehensive list of GPU options and pricing from cloud vendors. Very useful if you're looking to train or deploy large machine learning/deep learning models.
This upcoming course covers topics such as bootstrapping datasets and labels, model experimentation, model evaluation, deployment, and observability.
The format is 4 weeks of project-driven learning with a peer cohort of motivated, interesting learners. It takes about 10 hours per week, including interactive discussion time and project work. The first iteration of the course starts July 11th. We are offering a limited number of scholarships for the course (details on the course page).
Autolabel is quite orthogonal to this - it's a library that makes it easy to use LLMs to label text datasets for NLP tasks.
We are actively looking at integrating function calling into Autolabel, though, to improve label quality and support downstream processing.
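For anyone unfamiliar, here's a rough sketch of what function calling looks like with the OpenAI SDK. To be clear, this is not Autolabel's API (that integration hasn't shipped), and the function schema, model choice, and label set are made-up examples:

```python
# Sketch of OpenAI-style function calling for labeling: the model
# returns its label as a structured function call instead of free text.
# Schema, model, and labels are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

tools = [{
    "type": "function",
    "function": {
        "name": "submit_label",
        "description": "Submit the predicted label for the input text",
        "parameters": {
            "type": "object",
            "properties": {
                "label": {"type": "string",
                          "enum": ["positive", "negative"]},
            },
            "required": ["label"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[{"role": "user",
               "content": "Label the sentiment: 'Great product!'"}],
    tools=tools,
    # Force the model to answer via the function, for parseable output.
    tool_choice={"type": "function", "function": {"name": "submit_label"}},
)
print(resp.choices[0].message.tool_calls[0].function.arguments)
```

Constraining the output to a JSON schema like this is one way function calling can help label quality: it removes the free-text parsing step entirely.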