Good morning everyone! Yesterday, OpenAI released the widely (overly) anticipated "Strawberry" project under the "o1" name.
It's their best-performing model to date, but also one of their slowest by far, which raises the question: why is it so slow? Is the extra latency worth it? Is it any good? What does it do? How does it work? (And I had plenty more questions.)
I recently posted this tweet...
Funnily enough, chain-of-thought has a lot to do with o1.
Let's answer these questions (based on the information OpenAI has made available) in this week's video and better understand why a prompting technique is so relevant here.
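If you'd like a quick refresher on what chain-of-thought prompting actually looks like before the video, here's a minimal sketch using the official `openai` Python client. The model name and the example question are placeholders of my own, not anything from OpenAI's o1 announcement; the only difference between the two calls is the "think step by step" nudge.

```python
# Minimal sketch of chain-of-thought prompting, assuming the official
# `openai` Python client (>= 1.0) and an OPENAI_API_KEY in the environment.
# "gpt-4o" below is just a placeholder model name, not necessarily o1.
from openai import OpenAI

client = OpenAI()

question = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)

# Direct prompt: ask for the answer immediately.
direct = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": question}],
)

# Chain-of-thought prompt: nudge the model to write out its intermediate
# reasoning steps before committing to a final answer.
cot = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": question + "\nLet's think step by step, then give the final answer.",
    }],
)

print("Direct answer:\n", direct.choices[0].message.content)
print("\nChain-of-thought answer:\n", cot.choices[0].message.content)
```

The second call typically produces a short worked-out reasoning trace before the answer, which is exactly the kind of behavior o1 builds on under the hood.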
Watch the video or read the article version:
And that's it for this iteration! I'm incredibly grateful that the What's AI newsletter is now read by over 20,000 wonderful human beings. Click here to share this iteration with a friend if you learned something new!
Looking for more cool AI stuff? 👇
Looking for AI news, code, learning resources, papers, memes, and more? Follow our weekly newsletter at Towards AI!
Looking to connect with other AI enthusiasts? Join the Discord community: Learn AI Together!
Want to share a product, event or course with my AI community? Reply directly to this email, or visit my Passionfroot profile to see my offers.
Thank you for reading, and I wish you a fantastic week! Be sure to get enough sleep and physical activity next week!