
Nice summary of what I think is the most interesting way to use GPT at the moment: in an iterative loop, in conversation with other tools. It’s wild that this works. twitter.com/intrcnnctd/sta…
1/7 Spent the week-end with ControlNet, a new approach to have precise, fine-grained control over image generation with diffusion models. It's huge step forward and will change a number of industries. Here is an example. pic.twitter.com/9iJ9G8H50m

Eleven Labs has the most realistic AI text-to-voice platform I’ve seen. beta.elevenlabs.io (free to try) It’s 99% perfect. Generates great inflection, cadence, and natural pauses. Sample: pic.twitter.com/nf0agi4QTK
After tons of research and experimentation, here are the 6 types of information I provide in my ChatGPT mega-prompts: pic.twitter.com/dQbcAUQ0dy

I've launched a new website 🥳 It's a resource of ai-powered tools & services. Check it out at allthingsai.com pic.twitter.com/ZFOHs5RXMf
I wrote a guide, 'Fine tuning StableDiffusion v2.0 with DreamBooth' cc: @EMostaque @StabilityAI @NerdyRodent @KaliYuga_ai @amli_art @ykilcher @StableDiffusion @DiffusionPics @hashnode @sharifshameem @bl_artcult #stablediffusion2 #stablediffusion dushyantmin.com/fine-tuning-st…
We are excited to announce the release of Stable Diffusion Version 2! Stable Diffusion V1 changed the nature of open source AI & spawned hundreds of other innovations all over the world. We hope V2 also provides many new possibilities! Link → stability.ai/blog/stable-di… pic.twitter.com/z0yu3FDWB5

hyperpersonalized AI models are pretty exciting you can easily fine tune stablediffusion to consistently create multitudes of any sort of niche thing via dreambooth et al i wonder how many sufficiently distinct vibe-tunes "exist"? enjoying my new weird-synthstruments model: pic.twitter.com/eTCMobIKwl

Hello world! We’re launching our first product, MultiFlow. flow.multi.tech MultiFlow makes it easy to create, deploy, and rapidly iterate on generative AI workflows. pic.twitter.com/wKHpUZcoGv

shut. the. front. door. This is insane. A system to decode visual stimuli from brain recordings. So you're saying...in ten years...it'll be thought to image? link to paper🧵 #ai #deeplearning #mindblown pic.twitter.com/EjayPOIYrL


- Totally mind bending. There’s a mention in the thread from the study a few years ago which I remember seeing, but the images were barely legible. the improvement since then is crazy, and if it continues you can see it getting close to perfect replication
🪐 Introducing Galactica. A large language model for science. Can summarize academic literature, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more. Explore and get weights: galactica.org pic.twitter.com/niXmKjSlXW
Small rant about LLMs and how I see them being put, rather thoughtlessly IMO, into productivity tools. 📄 TL;DR — Most knowledge work isn't a text-generation task, and your product shouldn't ship an implementation detail of LLMs as the end-user interface stream.thesephist.com/updates/166861… pic.twitter.com/eEedO8Zf00

How & where do large language models (LLMs) like GPT store knowledge? Can we surgically write *new* facts into them, just like we write records into databases? Explainer 🧵 on how interpretability & model editing go hand-in-hand, and why these emerging areas are so important 👇 pic.twitter.com/MLPWk4pwSG
✨Excited to share a project I've been working on with advising from @karpathy . Introducing stableboost.ai , a practical tool to generate images and videos with AI! 🧵 youtu.be/97m-tfRmDZw
Hi! everyprompt.com is now publicly available! It’s a pretty good playground for GPT3, with cool useful things like folders, team support, a CI/CD, and all sorts of goodies. we think you’ll like it, we do! pic.twitter.com/0ASFgjLgN5

It's not just Stable Diffusion that has been getting a lot of attention lately. The whole generative AI landscape is blooming. @sequoia has come up with a handy little visualization to help us keep track of some of the most exciting players. 1/2 pic.twitter.com/2yZMM6VLuy

There's a debate on AI writing tools on Twitter right now. As an AI writer, I want to give my 2 cents. Here's my hot take: Mastering human language is out of reach for AI and it will remain this way unless current paradigms change radically. Here's why:
Hot take: everyone is wrong about AI writing tools. here's my 3-part theory why...
Any applied AI product today that relies on LLM’s must be able to adapt any new model to their use case within a week.
Spent last week building a tool that creates Stable Diffusion prompts given an image. It works decently well, see below. It can suggest prompts even if no one has created a similar image before. It’s been useful for me so might be for others too: latentspace.dev :) pic.twitter.com/ZSpSzkhV6s
Prompt-to-Prompt: Latent Diffusion and Stable Diffusion implementation with @huggingface diffusers is out github: github.com/google/prompt-… pic.twitter.com/QoIsax3xB1

Steps to using textual inversion with #automatic1111 : 1. Train your concept on colab colab.research.google.com/github/hugging… 2. Download your .bin (or anyone else's from huggingface.co/sd-concepts-li…) 3. Change filename to conceptX .pt 4. Move to embeddings folder 5. Use conceptX in your prompt pic.twitter.com/tEZjIKZcVx

♻️ Every single way to use @StableDiffusion: from "no technical skill needed" one-click installers, to the most cutting edge repo forks and note books github.com/sw-yx/prompt-e… Completely revamped, accumulating 1-2 months of HN/Reddit comments. pic.twitter.com/7M5RcdaKFA

By far the greatest benefit of using @Github Copilot so far is I now don’t have to be forced to document my code. I actively *want* to write great comments, because when I do, I get the dopamine hit of a good Copilot suggestion.

- Interesting
Understanding HTML with Large Language Models - Does in-depth analysis of HTML understanding models models. - Creates and open-sources a large-scale HTML dataset distilled and auto-labeled from CommonCrawl. proj: sites.google.com/view/llm4html/… abs: arxiv.org/abs/2210.03945 pic.twitter.com/zL4c56CjXw
I've been using OpenAI's Whisper model to generate initial drafts of transcripts for my podcast. But Whisper doesn't identify speakers. So I stitched it to a speaker recognition model. Code is below in case it's useful to you. Let me know how it can be made more accurate. pic.twitter.com/Fwx5XOnqbu

dreambooth #stablediffusion training is now available in 🧨diffusers! And guess what! You can run this on a 16GB colab in less than 15 mins! Github: github.com/huggingface/di… Colab for training: bit.ly/3SGPYmk Colab for inference: bit.ly/3UJ4oUL pic.twitter.com/XtIbLRsLSQ

If you have an Apple M1 or M2 and don't take advantage of its GPU, I'm about to change your life. These instructions allow TensorFlow to use your GPU: 1 of 10
Tired of battling with the wild west of large language model prompting frameworks and APIs?! We’re excited to introduce Manifest, our python framework that makes prompt programming simple, interactive, and reproducible. 💻: github.com/HazyResearch/m… pic.twitter.com/KWTeChsu4R
Since I discovered prompt injection, I owe you all a thread on how to fix it. TLDR: Don’t use instruction-tuned models in production on untrusted input. Either write k-shot prompt for a non-instruct model, or create your own fine-tune. Here’s how. pic.twitter.com/GlrCNHcMYC
