Wednesday, October 9, 2024

New top story on Hacker News: Show HN: FinetuneDB – AI fine-tuning platform to create custom LLMs

9 points by felix089 | 3 comments on Hacker News.
Hey HN! We’re building FinetuneDB (https://finetunedb.com/), an LLM fine-tuning platform. It enables teams to easily create and manage high-quality datasets, and it streamlines the entire workflow from fine-tuning to serving and evaluating models with domain experts. You can check out our docs here: https://ift.tt/IfNwB9g

FinetuneDB exists because creating and managing high-quality datasets is a real bottleneck when fine-tuning LLMs. The quality of your data directly impacts the performance of your fine-tuned models, and existing tools didn’t offer an easy way for teams to build, organize, and iterate on their datasets. We’ve been working closely with our pilot customers, both AI startups and more traditional businesses, such as a large newspaper that is fine-tuning models on its articles to automate content generation in its tone of voice.

The platform is built with an end-to-end workflow in mind: dataset building, fine-tuning, serving, and evaluating outputs. The centerpiece is a version-controlled, no-code dataset manager where you can upload existing datasets in JSONL, use production data, or collaborate with domain experts to create high-quality datasets for custom use cases. We also offer evaluation workflows that allow non-technical contributors to annotate data, review model outputs, and refine responses (LLM-as-judge is also available).

We offer:
- A free tier for developers and hobbyists who want to streamline dataset management.
- A business tier with full feature access for teams, using per-seat pricing.
- A custom tier for model hosting, custom integrations, and self-hosting.

Most users still use OpenAI models, but if you're working with open-source LLMs, we offer pay-as-you-go pricing for serverless inference of Llama and Mistral models, with up to €100 in free credits to get started. We're in public beta right now, so any feedback, whether it’s about features, usability, or anything else, would be incredibly valuable.
If you've worked on fine-tuning models before or are curious about custom LLMs, we’d love to hear from you. Our goal is to make the fine-tuning process more accessible and help more companies leverage their data and domain experts to create custom LLMs. Thanks for checking it out!
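For readers who haven't built a fine-tuning dataset before: a JSONL dataset like the kind described above is simply one JSON training example per line. A minimal Python sketch follows; the chat-message schema shown is the common OpenAI-style format, and the example content is illustrative — the exact schema FinetuneDB expects is an assumption here.

```python
import json

# Hypothetical training examples in the common OpenAI-style chat format.
# Each element becomes one line of the JSONL file.
examples = [
    {
        "messages": [
            {"role": "system", "content": "Write in the paper's house style."},
            {"role": "user", "content": "Summarize: council approves new park."},
            {"role": "assistant", "content": "The council has approved a new downtown park."},
        ]
    },
]

# Serialize to JSONL: one compact JSON object per line.
jsonl = "\n".join(json.dumps(ex, ensure_ascii=False) for ex in examples)

# Round-trip check: every line must parse back into a record
# with a "messages" list, or the fine-tuning job will reject it.
for line in jsonl.splitlines():
    record = json.loads(line)
    assert isinstance(record["messages"], list)

print(len(jsonl.splitlines()))  # 1
```

The round-trip validation step is worth keeping in any real pipeline: malformed lines are the most common reason a fine-tuning upload fails.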

Wednesday, October 2, 2024

New top story on Hacker News: Show HN: Kameo – a Rust library for building fault-tolerant, async actors

24 points by tqwewe | 6 comments on Hacker News.
Hi HN, I’m excited to share Kameo, a lightweight Rust library that helps you build fault-tolerant, distributed, and asynchronous actors. If you're working on distributed systems, microservices, or real-time applications, Kameo offers a simple yet powerful API for handling concurrency, panic recovery, and remote messaging between nodes.

Key features:
- Async Rust: each actor runs as a separate Tokio task, making concurrency management simple.
- Remote messaging: seamlessly send messages to actors across different nodes.
- Supervision and fault tolerance: create self-healing systems with actor hierarchies.
- Backpressure support: bounded and unbounded mpsc messaging.

I built Kameo because I wanted a more intuitive, scalable solution for distributed Rust applications. I’d love feedback from the HN community and contributions from anyone interested in Rust and actor-based systems. Check out the project on GitHub: https://ift.tt/acHWJKb

Looking forward to hearing your thoughts!
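The core pattern the post describes — one mailbox-driven task per actor, with a bounded queue providing backpressure — is language-agnostic. The sketch below is a minimal asyncio analogue of that pattern, not Kameo's actual Rust API; the `Counter` actor and its messages are invented for illustration.

```python
import asyncio

class Counter:
    """Minimal actor: owns its state, processes one message at a time."""

    def __init__(self, mailbox_size: int = 16):
        # A bounded queue provides backpressure, like a bounded mpsc channel.
        self.mailbox: asyncio.Queue = asyncio.Queue(maxsize=mailbox_size)
        self.count = 0

    async def run(self):
        # One long-lived task per actor, mirroring "each actor runs
        # as a separate Tokio task". Only this task touches self.count.
        while True:
            msg, reply = await self.mailbox.get()
            if msg == "inc":
                self.count += 1
                reply.set_result(self.count)
            elif msg == "stop":
                reply.set_result(self.count)
                break

    async def ask(self, msg):
        # Request/reply: enqueue the message and await the response future.
        reply = asyncio.get_running_loop().create_future()
        await self.mailbox.put((msg, reply))
        return await reply

async def main():
    actor = Counter()
    task = asyncio.create_task(actor.run())
    for _ in range(3):
        await actor.ask("inc")
    final = await actor.ask("stop")
    await task
    return final

print(asyncio.run(main()))  # 3
```

Because all state mutation happens inside the actor's single task, no locks are needed — the mailbox serializes access, which is the main appeal of the actor model for concurrency.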
