Aug 27 • 11M

Exploring the Latest in AI: TextFX, Visualizing AI, JamBot, and OpenAI's Fine-Tuning Update

 


John Siwicki
Our podcast where we go deep on the current topics

John Siwicki discusses several interesting topics in this episode. He starts by introducing the TextFX project from Google, which offers AI-powered tools for rappers, writers, and wordsmiths. He explores the different tools available, such as Simile, Explode, Unexpect, Alliteration, Acronym, Fuse, Scene, and Unfold.

Next, he mentions a project called "Visualizing AI" by Google DeepMind. This project showcases artwork and animations commissioned from artists to represent AI. The visuals are captivating and provide a unique perspective on the technology.

John then moves on to discuss a new plugin called Jambot by Figma. Jambot brings ChatGPT into FigJam, Figma's whiteboard software, allowing users to pull AI-generated responses into ideation and collaboration sessions.

He also highlights the launch of Code Llama by Meta, a model specifically tuned for writing code. The model is open-sourced, and Perplexity Labs provides a playground where users can experiment with it.

Lastly, John shares the big news of the week: OpenAI's update to their API, which now allows fine-tuning of GPT-3.5 Turbo. Fine-tuning lets users train the model on their own data, resulting in higher-quality results, shorter prompts, lower costs, and lower-latency requests.
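As a rough sketch of the workflow John describes (assuming the official `openai` Python SDK; the file name and example messages here are illustrative, not from the episode), fine-tuning starts by writing chat-formatted training examples to a JSONL file and then uploading it to start a job:

```python
import json

# Each training example is a chat transcript: a system prompt, a user
# message, and the assistant reply we want the fine-tuned model to imitate.
examples = [
    {"messages": [
        {"role": "system", "content": "You answer in our brand voice."},
        {"role": "user", "content": "What does fine-tuning give us?"},
        {"role": "assistant", "content": "Higher quality, shorter prompts, lower latency."},
    ]},
]

# Write one JSON object per line -- the JSONL format the API expects.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Uploading the file and launching the job requires an API key,
# so it is shown here but not executed:
# from openai import OpenAI
# client = OpenAI()
# upload = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
# job = client.fine_tuning.jobs.create(training_file=upload.id, model="gpt-3.5-turbo")
```

Once the job finishes, the response includes a fine-tuned model name that can be passed to the chat completions endpoint in place of the base model.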

Key Takeaways:

  1. Google's TextFX project offers AI-powered tools for rappers, writers, and wordsmiths.

  2. "Visualizing AI" by Google DeepMind showcases artwork and animations inspired by AI.

  3. Figma's Jambot plugin brings ChatGPT into FigJam, their whiteboard software, for collaborative ideation.

  4. Meta launched Code Llama, an open-sourced model tuned for writing code, available for experimentation.

  5. OpenAI's API update allows fine-tuning of GPT-3.5 Turbo, resulting in higher-quality results, lower costs, and lower-latency requests.

Links: