Show HN: Open-Source Alternative to OpenAI Platform, for Local Models

github.com

83 points by aliasaria a day ago

Hi everyone, we’re a small team, supported by Mozilla, working on re-imagining a UI for training, tuning and testing local LLMs. Everything is open source. If you’ve been training your own LLMs, or have always wanted to, we’d love for you to play with the tool and give feedback on what the future development experience for LLM engineering could look like.

debadyutirc a day ago

Really cool project! Looking forward to trying this out. I have been using the Copilot extensions with local docs for RAG augmentation. This seems to be a step up.

ktko a day ago

I tried this out last night, and the UI is so intuitive! I have tested a few other tools in the space, and this was by far the most user-friendly and clearly made with developers in mind :)

Excited for more features to support enterprise functionality and deployment, but overall impressed with what the team has built so far. Well done!

wooderson_iv a day ago

Who is this for? I get it for hobbyists, but do you imagine it being used for...startups that...what, self-host AI? Where's support for the rest of OpenAI functionality? Where's assistants?

  • aliasaria a day ago

    We want this tool to be for everyone who wants to train and tune their own LLMs.

    Our vision is that the sphere of people who do these types of tasks will grow as LLMs become more accessible and the tooling becomes more reliable.

    Let us know what other types of functionality you'd like to see. I will record Assistant-type functionality for our team to look into!

  • dadmobile a day ago

    We're a small team and just getting started, but we do want to cover all of the workflows you expect from a platform like this. We have simple tool-use functionality and some models that work well with it in the gallery. We'll expand on that based on user feedback so let us know if you have thoughts on which things are most important!

mbcool a day ago

Runs great on Apple Silicon. Impressed with one-click downloads and its flexible fine-tuning suite.

  • aliasaria a day ago

    Apple MLX is a game changer for what is possible in Local LLM development for everyone. Getting this to work as a single application that "just works" across platforms has been one of the hardest engineering problems we've ever worked on, but we're determined to get it right.

misslivirose a day ago

The plugin for doing RAG and being able to quickly test different parameters for chunking made it really easy to see how I could make improvements to my local embeddings workflow. Already seeing better results.
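    For context, chunk size and overlap are the two parameters most chunking strategies expose, and sweeping them is exactly the kind of experiment described here. A minimal sliding-window chunker (an illustrative sketch, not Transformer Lab's actual implementation) shows why the two interact:

    ```python
    def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
        """Split text into fixed-size chunks with a sliding-window overlap.

        Illustrative sketch only -- not Transformer Lab's plugin code.
        """
        if overlap >= chunk_size:
            raise ValueError("overlap must be smaller than chunk_size")
        step = chunk_size - overlap  # how far the window advances each time
        chunks = []
        for start in range(0, len(text), step):
            chunk = text[start:start + chunk_size]
            if chunk:
                chunks.append(chunk)
        return chunks

    # More overlap means more chunks over the same document: a bigger
    # index in exchange for better recall at chunk boundaries.
    doc = "x" * 1000
    print(len(chunk_text(doc, chunk_size=200, overlap=0)))   # 5
    print(len(chunk_text(doc, chunk_size=200, overlap=50)))  # 7
    ```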

anonymous_llama a day ago

This looks pretty cool! Does it support the new DeepSeek model as well?

  • dadmobile a day ago

    Yes! The easiest way to run this locally is to use one of the distilled models (you can download one from the Model Zoo or enter any Hugging Face ID at the bottom of the page). If you are on a Mac, the MLX versions work great, and of course GGUF if you want a quantized model or don't have a GPU.

cvanvlack a day ago

Love that you can pull in other open source plugins. Super cool!

  • aliasaria a day ago

    If you know any we should incorporate, let us know! We really want to bring the best of open source LLM tools together in one UI.

abhimazu1006 a day ago

I tried the tool; it works flawlessly on Mac. Fine-tuning was quick. Do you support model conversion as well, to TensorRT?

  • aliasaria a day ago

    The functionality in Transformer Lab comes from plugins. Plugins are just Python scripts behind the scenes. So anything that can be done in Python can be done as a plugin.

    Right now we have export plugins for going to GGUF, MLX and LlamaFile but if you know a good library for exporting to TensorRT, let's make a plugin for this! (Feel free to join our Discord if you want help)
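    To illustrate the idea, a hypothetical export plugin might be no more than a script like the sketch below. Every name here is illustrative, not Transformer Lab's real plugin interface; the point is only that the body is ordinary Python, so any conversion library could slot in.

    ```python
    # Hypothetical plugin sketch -- the function and parameter names are
    # illustrative, not Transformer Lab's actual plugin API.

    def export_model(model_path: str, output_format: str) -> dict:
        """Pretend-export a model. A real plugin would invoke the target
        library's conversion code here (e.g. a TensorRT converter)."""
        supported = {"gguf", "mlx", "llamafile"}  # formats mentioned upthread
        if output_format not in supported:
            raise ValueError(f"no exporter registered for: {output_format}")
        # ... the actual conversion work would go here ...
        return {"input": model_path, "format": output_format, "status": "ok"}

    print(export_model("models/my-llm", "gguf"))
    ```

    Adding TensorRT support would then just mean registering one more format and calling the conversion library inside the function.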

dominiclau18 a day ago

Much-needed product today - awesome work, and thanks for sharing this with us!

sacckey a day ago

Such a cool GUI!