
Math: The one thing LLMs struggle with 🧮

at least for now

Hi folks! 👋🏻 Are you tired of hearing how amazing AI is at everything from creating art to playing board games better than you?

Well, I've got some good news for you: Large Language Models are not that good at math!

Let's see why, and how researchers are working on it.

Language models are still bad at math

Large language models like GPT-3 are very good at generating text and at finishing or rewriting our sentences or code.

Yet, they seem to struggle with math.

The challenge is that large language models (LLMs) are not designed for that purpose.

Current models do not understand the underlying principles of math, so they cannot apply that knowledge flexibly to solve problems -- for example, when the same problem is presented in a different way.

But with enough "math instruction", models can do better

This paper showed that LLMs (Large Language Models) can add numbers of up to 19 digits if the algorithm is spelled out in the prompt itself.

They call it "algorithmic prompting". The trick seems to be describing the algorithm in enough detail that there is no room for misinterpretation -- in a similar way to how prompting works in Stable Diffusion, but for math!
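To make the idea concrete, here is a minimal sketch of what an algorithmic prompt for addition could look like: instead of just showing input/output pairs, the prompt walks through every step of schoolbook carry addition. The wording and the helper function below are illustrative assumptions, not taken from the paper itself.

```python
# Sketch of an "algorithmic prompt" for multi-digit addition.
# The prompt spells out the carry-addition algorithm digit by digit,
# so the model has no room to misinterpret the procedure.
# (Illustrative only -- not the paper's exact prompt format.)

def algorithmic_addition_prompt(a: int, b: int) -> str:
    """Build a worked example that narrates schoolbook addition step by step."""
    da, db = str(a)[::-1], str(b)[::-1]  # least-significant digit first
    steps, carry = [], 0
    for i in range(max(len(da), len(db))):
        x = int(da[i]) if i < len(da) else 0
        y = int(db[i]) if i < len(db) else 0
        total = x + y + carry
        steps.append(
            f"Position {i + 1}: {x} + {y} + carry {carry} = {total}, "
            f"write {total % 10}, carry {total // 10}."
        )
        carry = total // 10
    if carry:
        steps.append(f"Final carry: write {carry}.")
    steps.append(f"Answer: {a + b}.")
    return f"Problem: {a} + {b}\n" + "\n".join(steps)

print(algorithmic_addition_prompt(468, 57))
```

A few worked examples like this would go into the prompt before the actual question, giving the model an unambiguous procedure to imitate.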

📚 Educational corner

🛼 Makers corner

🤓 Tech & Other news

🎨 Creativity corner

🧱 What are you building?

If you're using AI in your work or projects, I'd love to hear about it! Please reply to this email and let us know what you're up to. We may feature your work in our newsletter.

❤️ If you like The Prompt, and want to support my work:

• Share The Prompt with a friend, and invite them to subscribe here.

• Book an ad in The Prompt (reply to this email if you're interested)

What'd you think of today's edition?


Thank you for reading! ✌🏼