Stable Diffusion V2 ❤️🔥
Free OpenAI+Bubble tutorial, "Flying" through photos, educational content & more
Hi folks!✌🏼
🎁 Special something in this newsletter: Marina created a very detailed tutorial on how to use Bubble with OpenAI's models like GPT-3 and DALL·E.
With it, you can easily build your own AI assistant, image generator, and much more using just APIs and Bubble, so no coding is required!
She is giving it all away for free: Link to tutorial.
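For the curious, here's roughly what sits behind a no-code setup like that: a single HTTP call to OpenAI's completions endpoint, the same request Bubble's API connector makes for you. A minimal sketch in Python, assuming an OPENAI_API_KEY environment variable is set; the model name and prompt are just placeholders:

```python
# Minimal sketch: the raw OpenAI API call a no-code builder wraps.
# Assumes the `requests` package is installed and OPENAI_API_KEY is set.
import os
import requests

response = requests.post(
    "https://api.openai.com/v1/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "text-davinci-003",  # GPT-3 completion model
        "prompt": "Write a friendly welcome message for my app.",
        "max_tokens": 100,
    },
)
print(response.json()["choices"][0]["text"])
```

The image side works the same way, just against the /v1/images/generations endpoint with a text prompt.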
And now, back to our regularly scheduled programming 👇🏻
Stable Diffusion V2 is here💃🏻
The Stability team released Stable Diffusion V2, which is trained on a new, filtered LAION dataset and greatly improves the quality of generated images compared to the earlier V1 releases.
They also introduced a 4x upscaler diffusion model that increases the resolution of generated images 👇🏻
+ they added a depth-to-image diffusion model that takes an input image and generates new images conditioned on its depth information 👇🏻
More details about the model are on their GitHub, and if you want to try the largest model, visit this Hugging Face space.
Who's excited for this? I know I am!
PS: We might have to learn how to prompt all over again: "trending on artstation" and other V1-era keywords don't work with this version 😅 Read Emad's thread on the changes.
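If you'd rather try V2 locally than in a browser, the new checkpoints are already on the Hugging Face Hub and load with the diffusers library. A minimal sketch, assuming diffusers, transformers, and torch are installed and a CUDA GPU is available (the prompt is just an example; the sibling repos stabilityai/stable-diffusion-x4-upscaler and stabilityai/stable-diffusion-2-depth hold the upscaler and depth-to-image models):

```python
# Minimal sketch: text-to-image with Stable Diffusion 2 via diffusers.
import torch
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

model_id = "stabilityai/stable-diffusion-2"  # base text-to-image checkpoint on the Hub
scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(
    model_id, scheduler=scheduler, torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# V1-era magic words like "trending on artstation" won't buy you much here.
image = pipe("a misty forest at sunrise, ultra-detailed photograph").images[0]
image.save("forest.png")
```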
DiffDreamer 🛌
The DiffDreamer model generates novel views of a scene: starting from a single existing image, it "flies" through the scene and synthesizes what the camera would see along the way.
This can be really useful for generating views of natural scenes, like mountains or forests, that are difficult to photograph from every angle. Their code is coming soon!
📚 Educational corner
Thanks to your suggestions, from now on I'll include educational content as well!
"Getting started with Diffusers" - This Colab notebook showcases an end-to-end example of usage for diffusion models, schedulers and pipelines
"Training with Diffusers" - This Colab notebook summarizes diffusion models training methods in a step-by-step appraoch.
The official Hugging Face course - Learn natural language processing (NLP) using libraries from the Hugging Face ecosystem. Free + no ads.
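To give a flavor of what the first notebook covers, here is a tiny sketch of the two levels diffusers exposes: a ready-made pipeline on one hand, and the underlying model + scheduler denoising loop on the other (google/ddpm-cat-256 is just one small example checkpoint from the Hub):

```python
# Minimal sketch of the diffusers building blocks: pipeline vs. model + scheduler.
import torch
from diffusers import DDPMPipeline, DDPMScheduler, UNet2DModel

# 1) High level: a pretrained pipeline handles the whole denoising loop for you.
pipeline = DDPMPipeline.from_pretrained("google/ddpm-cat-256")
pipeline(num_inference_steps=50).images[0].save("cat.png")

# 2) Lower level: the same loop spelled out with a model and a scheduler.
model = UNet2DModel.from_pretrained("google/ddpm-cat-256")
scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256")
scheduler.set_timesteps(50)

size = model.config.sample_size
sample = torch.randn(1, 3, size, size)                   # start from pure noise
for t in scheduler.timesteps:
    with torch.no_grad():
        noise_pred = model(sample, t).sample             # predict the noise at step t
    sample = scheduler.step(noise_pred, t, sample).prev_sample  # remove a bit of it
```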
🛼 Makers corner
Harvey raised $5 million from OpenAI to build a "copilot for lawyers"
Lisha launched PixelVibe - a stock photography platform with more than 100k AI-generated photos
Pierre launched an MVP of an AI wingman to help his buddies send better texts on Tinder
🤓 Tech corner
RAD-NeRF - a new model that synthesizes a talking 3D portrait of a person in real time from an audio clip.
EDGE is a new AI model that can create realistic dances that match any piece of music. See some robots dancing at this link.
Videogenic - a new model that extracts the most photogenic moments from a video.
SPIn-NeRF - achieves state-of-the-art performance, compared to older models, at segmenting and inpainting a scene using only a single annotated training image
Magic3D can create 3D models from text prompts. It takes only about 40 minutes per model, much faster than other methods that can take up to 1.5 hours, and it also offers text-based editing, giving people more creative control over the results.
🎨 Creativity corner
🧱 What are you building?
If you're using AI in your work or projects, I'd love to hear about it! Please reply to this email and let us know what you're up to. We may feature your work in our newsletter.
❤️ If you like The Prompt, and want to support my work:
Share The Prompt with a friend, and invite them to subscribe here.
Book an ad in The Prompt (reply to this email if you’re interested)
What'd you think of today's edition?
Thank you for reading! ✌🏼