Post by Anthropic


404,059 followers

You can now fine-tune Claude 3 Haiku—our fastest and most cost-effective model—in Amazon Bedrock: https://lnkd.in/e8NX_F-g. In testing, we fine-tuned Haiku to moderate comments on internet forums. Fine-tuning improved classification accuracy from 81.5% to 99.6% and reduced tokens per query by 89%. Early customers, like SK Telecom, have used fine-tuning to create custom Claude 3 models. These models deliver more effective responses across a range of use cases, from customer support to legal operations. Fine-tuning is currently available for Claude 3 Haiku in preview.

  • An image showing the hierarchy of task-specific performance improvement. From bottom to top: "Base model", "Prompt engineering", and "Fine-tuned", with arrows pointing upwards between each level. The text "Task-specific performance" is displayed vertically alongside the diagram. The background is a textured orange-red color.
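For readers who want to try this, a fine-tuning job in Amazon Bedrock can be launched programmatically. The sketch below builds a request for boto3's `create_model_customization_job` API; the S3 URIs, IAM role ARN, and hyperparameter values are placeholders for illustration, not values from the post.

```python
# Minimal sketch of a Bedrock fine-tuning job spec for Claude 3 Haiku.
# All bucket names, the role ARN, and the hyperparameter values are
# illustrative placeholders.

def build_job_spec(job_name: str, model_name: str, role_arn: str) -> dict:
    """Assemble the request body for bedrock.create_model_customization_job."""
    return {
        "jobName": job_name,
        "customModelName": model_name,
        "roleArn": role_arn,
        "baseModelIdentifier": "anthropic.claude-3-haiku-20240307-v1:0",
        "customizationType": "FINE_TUNING",
        # Training data is JSONL in S3; output artifacts land in S3 too.
        "trainingDataConfig": {"s3Uri": "s3://my-bucket/train.jsonl"},
        "outputDataConfig": {"s3Uri": "s3://my-bucket/output/"},
        "hyperParameters": {  # illustrative values only
            "epochCount": "2",
            "batchSize": "8",
            "learningRateMultiplier": "1.0",
        },
    }

spec = build_job_spec(
    "haiku-moderation-ft",
    "haiku-moderator",
    "arn:aws:iam::123456789012:role/BedrockFtRole",
)
# In a real run (with AWS credentials and a Bedrock-enabled region):
#   import boto3
#   bedrock = boto3.client("bedrock", region_name="us-west-2")
#   bedrock.create_model_customization_job(**spec)
```

Once the job completes, the resulting custom model is invoked like any other Bedrock model via its provisioned model ID.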
Phanindra Parashar

Machine Learning Engineer at Netconomy

3 weeks ago

Anthropic, out of curiosity, just a small question: how can this reduce tokens per query by 89%? Such a massive reduction could happen if the earlier method relied heavily on RAG for context.
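One plausible explanation: once the moderation policy and few-shot examples are baked into the weights, they no longer need to be sent with every request. A back-of-envelope sketch (all token counts below are invented for illustration, not Anthropic's actual numbers):

```python
# Illustrative arithmetic only: the counts are assumptions showing how
# dropping in-context instructions yields a large per-query saving.
BASE_PROMPT = 1800      # policy text + few-shot examples (assumed)
COMMENT = 180           # the comment being classified (assumed)
FINE_TUNED_PROMPT = 40  # short instruction once the policy is learned

before = BASE_PROMPT + COMMENT
after = FINE_TUNED_PROMPT + COMMENT
savings = 1 - after / before
print(f"tokens/query: {before} -> {after} ({savings:.0%} fewer)")
# prints: tokens/query: 1980 -> 220 (89% fewer)
```

With assumed numbers in that ballpark, an ~89% reduction is achievable without RAG ever being involved.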

Even the "Projects" feature is a pretty great step in this direction. It can't handle all of the file types of a customGPT, but for projects where we'd prefer to use Claude, it's nice to be able to dive right in with context pre-loaded. Of course, you have to TELL Claude that it has that data, as it "sees" it just as the origin of the conversation thread (unlike GPT, which knows it has a customize feature), but hey, we have features we couldn't have realistically built without Claude. That's rad.

Aditya Saxena

Building pmfm.ai | Ex-Microsoft/Amazon

3 weeks ago

RAG is still the best way to custom-train models.

Amazing news! We integrate well with Bedrock and provide datasets for fine-tuning models. Happy to speak with anyone who is interested: https://www.superannotate.com

Great work on fine-tuning Claude 3 Haiku! It's amazing to see the impact it's having on classification accuracy and token reduction. Truly inspiring. Keep it up!

Pablo Carmona Contreras

Management and Technological Innovation | Solutions Applying Technology | Healthcare Institution Administration | Specialist Dentist | Focus on Tech

3 weeks ago

Hi guys, I didn't want to bother you this way, but your inaction forced me to. I sent you a support request a couple of days ago: I had paid for Claude Pro until JUL 28, and when I went to request API access, my account status changed and I lost my previous prompts and my subscription. I have tried to contact support through the Help Center page without success. I really need help, and there is still no response from support. Thanks

Congratulations Anthropic team! Can't wait to start using this.

This is a real leap. The impressive accuracy boost and efficiency gains might reduce the apprehension companies feel about letting these tools support client-facing operations. It's great to see Anthropic making advanced AI more accessible and tailored to specific use cases.

Impressive results! Fine-tuning Claude 3 Haiku demonstrates the transformative potential of customized AI models. Looking forward to exploring how Claude 3 Haiku can further optimize our AI-driven initiatives.

Claude 3 Haiku's fine-tuning capabilities sound truly impressive. Exciting times ahead for custom model creation and tailored responses!
