GPT-4 is here 🚨

What you need to know and how you can get access

Hi folks!👋🏻 This is The Prompt! Seems like everyone woke up yesterday and decided to launch AI stuff.

GPT-4 is finally here and both Google and Anthropic have released APIs for their models too.

We cover everything today 👇🏻

FEATURED

GPT-4: Everything you need to know 🔥

The long-awaited GPT-4 is finally here. And we’re excited!

Here’s everything you need to know:

Capabilities

  • Multimodal: Accepts images & text as inputs to generate text & code.

  • Higher reasoning than ChatGPT: It passes a simulated bar exam with a score around the top 10% of test takers; in contrast, GPT-3.5’s score was around the bottom 10%.

  • 25,000-word context: you can fit full documents in a single prompt (roughly 8x more than ChatGPT).

  • More creative & collaborative: generate, edit, and iterate with users on writing tasks.

  • Customizable: You can prescribe your AI’s writing style, for example, “You are a tutor that always responds in the Socratic style.”

  • Improved safety: 82% less likely to respond to requests for disallowed content than GPT-3.5.
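The "Customizable" point above works through a system message. A minimal sketch of what such a request body looks like (the helper function is ours, the model name is illustrative, and nothing is actually sent to the API here):

```python
# Sketch of a chat-style request body that pins GPT-4 to a persona.
# The "system" message prescribes the writing style for the whole session.
def build_socratic_request(user_question: str) -> dict:
    return {
        "model": "gpt-4",
        "messages": [
            {"role": "system",
             "content": "You are a tutor that always responds in the Socratic style."},
            {"role": "user", "content": user_question},
        ],
    }

request = build_socratic_request("How do I solve 3x + 7 = 22?")
print(request["messages"][0]["role"])  # system
```

Every later turn in the conversation is appended to the same `messages` list, so the persona persists across the whole exchange.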

GPT-4 outperforms other ML models (PaLM, Chinchilla) and GPT-3.5 in the large majority of languages tested. (Details here)

How to try it?

  • You can try GPT-4 (text-only) with ChatGPT Plus ($20 a month).

  • You can try GPT-4 with image inputs via the Be My Eyes app, the first partner to integrate and test this capability.

API Access & pricing

API access is not public yet, but you can join the waitlist.

If you want to jump the line, you can get early GPT-4 access by contributing model evaluations to OpenAI’s Evals framework.
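For flavor, an Evals dataset is a `.jsonl` file where each line pairs a chat-formatted input with an ideal answer. A hedged sketch (field names follow the match-style evals in the openai/evals repo; the question itself is made up):

```python
import json

# One hypothetical sample line for a match-style eval: the model's answer
# to "input" is compared against the "ideal" completion.
sample = {
    "input": [
        {"role": "system", "content": "Answer with a single word."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
    "ideal": "Paris",
}

line = json.dumps(sample)          # one line of the .jsonl dataset file
print(json.loads(line)["ideal"])   # Paris
```

A full eval also needs a small YAML entry registering the dataset and the matching strategy, per the repo's instructions.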

The pricing is higher for GPT-4:

  • 8k-context model: $0.03 per 1k prompt tokens and $0.06 per 1k completion tokens.

  • 32k-context model: $0.06 per 1k prompt tokens and $0.12 per 1k completion tokens.
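At those rates, estimating the cost of a call is simple arithmetic. A quick sketch (the helper and model keys are our own naming, not OpenAI's):

```python
# GPT-4 rates per 1k tokens, in USD, from the pricing above.
PRICES = {
    "gpt-4-8k":  {"prompt": 0.03, "completion": 0.06},
    "gpt-4-32k": {"prompt": 0.06, "completion": 0.12},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimated USD cost of a single API call."""
    p = PRICES[model]
    return (prompt_tokens / 1000) * p["prompt"] \
         + (completion_tokens / 1000) * p["completion"]

# A near-full 8k-context prompt plus a short reply:
print(round(estimate_cost("gpt-4-8k", 7000, 1000), 2))  # 0.27
```

The same call at 32k rates costs exactly twice as much, so the larger context window is worth paying for only when you actually need to fit long documents.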

Do we finally know the model size?

Sadly, we know nothing. In their paper, OpenAI doesn’t disclose any details about GPT-4's size, architecture, or technical implementation.

Their stated reason: the highly “competitive” landscape of the space.

So, not so “Open” anymore?

Limitations

  • Not fully reliable and may still hallucinate.

  • Still lacks knowledge of events after September 2021.

  • Not all “jailbreaks” are mitigated yet.

TOGETHER WITH CATBIRD

The first multi-model image generator

With just one click, you can generate images using 18 different vision models!

Sounds incredible, right?

All you have to do is provide a prompt, and the platform will whip up images using 18 advanced vision models.

Stable Diffusion, DALL·E, and fine-tuned models such as Disney style, Orange mix, OpenJourney, and more are supported.

Here's the best part: CatBird is supporting today's issue, and they're giving you the chance to try their app for free!

WHAT ELSE IS GOING ON

🤯 Google released their PaLM API and added AI to everything! Gmail, Docs, Sheets, Slides, Meet - all these apps are getting an AI makeover.

👀 Anthropic releases an API for their LLM Claude. Anthropic was founded by ex-OpenAI employees, and Claude handles summarization, search, creative and collaborative writing, Q&A, and coding.

🪟 Microsoft to share announcements about AI & productivity. They scheduled their event for tomorrow, March 16th.

🚀 Bing has been running on GPT-4 this whole time. Prometheus, the search-adapted model behind Bing Chat, is powered by GPT-4.

💰 Stripe combats fraud using GPT-4. They also use it to customize support and answer customers’ support questions.

RESOURCES

The best resources we came across lately that will help you become better at writing prompts & building AI apps.

TOOLBOX: SPECIAL GPT-4 POWERED EDITION

The latest AI tools to use or get inspiration from.

PROMPT OF THE DAY

TOOL

Midjourney

PROMPT

Nike and Lego collaboration on robots with shoes, photorealistic, wide-angle shot, cinematic lighting, elegant, studio, 8K dynamic::0 --s 750 --uplight 

LATEST PAPERS

  • MELON: NeRF with Unposed Images Using Equivalence Class Estimation

  • FreeNeRF: Improving Few-shot Neural Rendering with Free Frequency Regularization

  • ViperGPT: Visual Inference via Python Execution for Reasoning

  • Edit-A-Video: Single Video Editing with Object-Aware Consistency

  • Breaking Common Sense: WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images

What'd you think of today's edition?