Sure, just starting to get some up on HF. A good example might be GSM8K, as it shows the structured output where every result is strictly formatted. I am using this right now to train models and managing to get a small Qwen model up into the 60% range, which wildly is higher than Llama 2 and xAI's Grok 1.
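For anyone unfamiliar with why GSM8K suits strict scoring: each reference solution ends with a `#### <answer>` line, so grading a model comes down to extracting that final number and comparing strings. A minimal sketch (the helper name is mine, not from any library mentioned here):

```python
import re

def extract_gsm8k_answer(solution: str):
    """Pull the final answer from a GSM8K-style solution string.

    GSM8K solutions end with a line of the form '#### <answer>',
    which is what makes exact-match scoring straightforward.
    """
    match = re.search(r"####\s*([\-0-9.,]+)", solution)
    if match is None:
        return None
    # Strip thousands separators so "1,200" and "1200" compare equal.
    return match.group(1).replace(",", "").strip()

print(extract_gsm8k_answer("16 - 3 - 4 = 9 eggs.\n9 * 2 = 18\n#### 18"))  # 18
```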
Very good, and even better with the new DAG approach - we have been using great-expectations to bench and are seeing very good diversity and low duplication - you can check out one of the recent CoT examples here: https://huggingface.co/datasets/lukehinds/deepfabric-devops-...
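For context on the duplication point: the simplest form of such a check is an exact-match duplication rate after light normalisation. This is just an illustrative sketch of the idea, not the great-expectations suite the comment refers to:

```python
def duplication_rate(samples):
    """Fraction of samples that exactly duplicate an earlier sample
    (after whitespace/case normalisation)."""
    seen = set()
    dupes = 0
    for s in samples:
        key = " ".join(s.split()).lower()  # normalise whitespace and case
        if key in seen:
            dupes += 1
        seen.add(key)
    return dupes / len(samples) if samples else 0.0

print(duplication_rate(["Write a Dockerfile", "write a  dockerfile", "Fix the CI"]))
```

Real suites typically go further (near-duplicate detection via n-gram or embedding similarity), but an exact-match rate is the usual first sanity check on synthetic data.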
This dataset disappeared. Did it move or get pulled for some reason? (I glanced at it when you noted this and went back today to check it out, only to find a 404...)
Ah right, kiln - Deepfabric was originally named promptwright, and I can see kiln has copied over some of our code and used it for its synth-gen (which is a nice compliment!)
We are actually planning on moving to graphs now, which are giving us better results than trees - check it out if you also want to use them in kiln, but you might want to wait until we validate a little more and lift it out of experimental.
I think the key difference between the two, since kiln adopted the same approach, is the ability to generate reasoning / chain of thought and export to Alpaca, ChatML, etc., along with direct export to unsloth.ai's formatting. I doubt we will have a UI, as it's for running on backend systems as part of an ML pipeline, along with being a library / SDK.
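For readers unfamiliar with the export formats being mentioned: ChatML wraps each conversation turn in `<|im_start|>` / `<|im_end|>` markers. A rough sketch of what such an exporter does (illustrative only, not DeepFabric's or Kiln's actual implementation):

```python
def to_chatml(messages):
    """Render a list of {'role': ..., 'content': ...} dicts as a ChatML string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    return "\n".join(parts)

sample = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 2 + 2?"},
    {"role": "assistant", "content": "Reasoning: 2 + 2 = 4.\n\nAnswer: 4"},
]
print(to_chatml(sample))
```

Alpaca-style export is the same idea with different framing: instruction/input/output fields in JSON rather than inline role markers.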
I personally wrote Kiln's SDG code myself -- no code was copied from here or anywhere else. Not sure where that claim is coming from, but it's not accurate.
I might have taken some of the prompts and modified them. I didn't recognize the new name, do recognize the old one.
Edit:
- just confirmed. No code copied. Prompts were originally from the Pluto library, then modified by the library above, then modified again by me for Kiln.
- And just to clarify, Kiln has had support for chain of thought, reasoning, and all major export formats (ChatML/Unsloth/OpenAI/Hugging Face). Plus API integrations with Together, Fireworks, OpenAI, Google Vertex.
People should try both. I just want to be clear on the origins of the code/prompts, and the feature set.
# The contents of this file are adapted from the promptwrite library (https://github.com/StacklokLabs/promptwright),
# which was adapted from the pluto library (https://github.com/redotvideo/pluto).
I read the code. I also remember writing the code and that comment.
As disclosed: some prompt strings were taken and modified, but none of the code was. The original strings are using a templating library that we don't support, so their code/strings wouldn't have worked in our codebase, nor would the wrapping code. Those interfaces/LOC are all unique. It's possible for some "content" to be taken (partial prompt strings), but zero code, and the statement "copied over some of our code and used it" to be incorrect.
Not trying to make a big deal of this, just clarifying these are separate libraries, with no shared code. Looks like the author saw the comment and assumed we used code (vs prompts); not a big deal, but not the case. Their work is super cool, and did inspire parts of my project.
Also worth noting, the library Pluto originated this prompt (as far as I know), and it's been tweaked/evolved many times over.
Hey there, this thread is getting derailed. Could you please create a separate post for your project and let this one be for discussion of deepfabric? Thanks!
Agreed, and sorry about that. Maybe edit the incorrect comment about "I can see kiln has copied over some of our code" for clarity. I get it was probably an honest mistake, but it's hard not to reply when people are claiming I copied something I didn't. Great project, people go check out deepfabric!
We are an innovative startup founded by the creators of Kubernetes, Sigstore and the folks who bootstrapped foundations such as the CNCF and OpenSSF.
Our mission is to revolutionize the software industry by providing a secure and trustworthy software supply chain. With our deep expertise in open-source technologies and commitment to enhancing software security, we are seeking highly skilled and motivated individuals for multiple roles:
* Senior FrontEnd Engineer
* Senior Site Reliability Engineer
* Staff Product Manager
* Staff Security Software Engineer
* Staff Site Reliability Engineer
* Staff Software Engineer - Core Platforms and OSS
Yep, currently gpt-3.5-16k or gpt-4. We wrote the example prompts in a relatively Llama-compatible way though (we actually started building this on Llama 1 before switching to OpenAI as the default), and make few assumptions about the LLM, so it's easy to switch out. Mostly this is waiting on us adding an option to pass in any LLM, which we're planning to support.
Generally, we leave the LLM up to the user -- if OpenAI or Google is a no-go, then you're probably already in the territory of self-hosting or even self-training your LLM, which means you're fine setting up your own inference endpoints as well.
https://github.com/lukehinds/deepfabric/discussions/334