Search through your chats and artifacts
Answering the perennial question: what did you get done this week?
Shipping with 💙,
— the interface0 team (Andy, Claude, Codegen, and Cursor)
You can now toggle between workspaces for your personal chats & artifacts and ones for each team you're on — plus an "All" option that shows everything. This makes it easy to keep work and personal content separate, or switch between different teams.
When you're in a workspace, any new chat or artifact is automatically shared with everyone in that workspace.
WhatsApp integration is now live. Go to "WhatsApp interface0" in your settings to verify your phone number and set it up. Once you do, you can just message interface0 on WhatsApp anytime — even sending voice notes. There's nothing easier than popping open WhatsApp and sending a voice note; interface0 will reply right there.
It uses autopilot on the backend, selecting the right model, knowledge, and tools for the job, and using everything else at its disposal, like memories. I'm really enjoying "AI where I already am" (WhatsApp, email). Who needs or wants to open a whole new interface?
I built a really slick WhatsApp integration. You can send voice notes (or texts, or images) to interface0 on WhatsApp and it replies right there. It is awesome. But for unknown reasons, Meta has decided I don't deserve to roll this out to all of you. They won't let me finalize registering my phone number. Does anyone know someone at Meta/WhatsApp who can help me out? (Email [email protected] if you can help with some of this.)
You can now email [email protected] from other emails. In settings, you can add "additional emails" to your account. Once you verify them, you can email [email protected] from those accounts as well and you will get responses. Useful across personal and business emails. Thanks CJ for pushing me on this.

If you select gpt-oss-120b from your models list and ask it for something long (e.g. a memo), it will write it very fast (hence the lightning bolt). When you use that model, it prioritizes Cerebras as a provider, which has really, really fast inference — like 50-100x the throughput of Claude 4 or GPT-5 (~3,500 tokens per second vs. ~50). Unfortunately Cerebras can't run the major providers' closed-source models, but at least you have one option now. Thanks Cole for the idea.

Fixed a bizarre issue with certain new OpenAI models and how OpenRouter handles their tool & reasoning streaming (took more time than I'd like to admit...) — should all work fine now! Also did a big backend services refactor that should hopefully not change anything in your experience, but let me know if you spot issues; hardened the autopilot flow so it fails and falls back to defaults less often; cleaned up a bunch of little edge cases around editing certain things and inconsistent saving; stopped autopilot from getting confused about "always on" tools; and improved user usage tracking on my end.
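For the curious, the autopilot hardening is conceptually just a guarded selection step: if automatic selection fails or hangs, the chat falls back to known-good defaults. A rough sketch of that pattern (all names are illustrative, not interface0's actual code):

```typescript
// Hypothetical sketch of "fall back to defaults" hardening around an autopilot step.
// None of these names are interface0's real APIs; they only illustrate the pattern.

interface ChatSettings {
  model: string;
  tools: string[];
  knowledge: string[];
}

const DEFAULT_SETTINGS: ChatSettings = {
  model: "claude-sonnet-4",
  tools: ["web_search"],
  knowledge: [],
};

async function resolveSettings(
  pickAutomatically: () => Promise<ChatSettings>
): Promise<ChatSettings> {
  try {
    // Bound the selection step so a slow or failing call can't stall the chat.
    return await Promise.race([
      pickAutomatically(),
      new Promise<never>((_, reject) =>
        setTimeout(() => reject(new Error("autopilot timed out")), 5_000)
      ),
    ]);
  } catch {
    // Any failure degrades gracefully to known-good defaults.
    return DEFAULT_SETTINGS;
  }
}
```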
There is now a public changelog page. You're on it.
The new chat page and individual chats load much faster. Previous slowness should be gone.
Added an "upload artifact" button in the artifacts view. You can upload batches of Markdown files (e.g., from Obsidian) and then chat with them.
Better example chats in your inbox. Also, tag in the shared "interface0 info" knowledge folder for product FAQs.
[email protected] now has a Gmail profile photo.

Jank reports welcome — anything that feels off or buggy. Even if you're unsure whether it's user error, please ping me :)
Now, you can collaboratively edit & store documents in interface0. This week's demo video shows it off. The agent has access to your stored artifacts for context and editing, and can create new ones. You can then edit the artifacts in the chat itself alongside the AI, and it keeps version history across all those edits. Artifacts can be added to knowledge folders or treated separately. Some good use cases:
Let me know what you think and what you use it for! This will end up serving as the foundation for building your knowledge base inside interface0. More to come!
When you refresh a chat, at the bottom of the assistant messages you can now see how many tokens it used, and the context window size of the model. If you're getting close, it'll highlight in red. (Thanks Oliver for the idea)
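The highlight logic is as simple as it sounds; here's a rough sketch (the threshold and names are my own illustration, not the actual implementation):

```typescript
// Illustrative only: flag a chat as "close to the limit" once usage crosses a threshold.
function contextUsage(tokensUsed: number, contextWindow: number) {
  const ratio = tokensUsed / contextWindow;
  return {
    percent: Math.round(ratio * 100),
    nearLimit: ratio >= 0.8, // assumed threshold; the real cutoff may differ
  };
}

// e.g. contextUsage(170_000, 200_000) -> { percent: 85, nearLimit: true }
```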
It is now way better / more in-depth, especially when you have a reasoning model selected. It takes a second longer but I think it is worth it. Let me know if you disagree. (Thanks CJ)
Multiple improvements across the board:
GPT-5 models were added the day they came out, and you also now have Opus 4.1 (caution: expensive!) and the OpenAI open-weight model. I'm still using Sonnet 4 & o3 for most things, but mixing in Grok 4 and GPT-5 too. interface0 also now uses GPT-5 Mini/Nano for a bunch of the behind-the-scenes stuff (triggered messages, enhance prompt, title generation, etc.) and so you should see improvements there.
Web search/crawl, to-do list, and artifacts tools are now always enabled, and the agent chooses when to use them. I also spent some time improving the interface0 system prompt overall — you should see general improvements, especially around intelligent tool use. But, as we've seen from some of the OpenAI sycophancy issues historically: system prompt edits can have unexpected effects. If you spot anything weird, let me know. I will continue to iterate here.
You can no longer delete beta team prompts and templates (thanks whoever deleted one of those for everyone!).
Wrote a bunch of scripts & admin stuff to help manage team onboarding. Much less manual now. Also: you now cannot access team features unless you're on a team plan.
Apologies if you suffered from the ~15 minutes of downtime on Sunday! My bad.
Now, you can interact with interface0 entirely from your email inbox — no need to go to the website. Try it out: forward this very email to [email protected] and say "what's this all about?"
This was a monster feature behind-the-scenes (email is complicated!), and supports a lot of functionality:
Here's an example of just forwarding along a contract draft to agent@ and it responding, taking into account the attached contract and the thread history.
Again: this feature was big, so I'm sure there are some rough edges (especially given variance in different email clients, and the inherent quirks / fickleness of these lovely LLMs). Please ping me if you run into any issues at all.
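To give a flavor of where those rough edges come from: every inbound message has to be matched to a verified sender, stripped of quoted history, and threaded into the right chat before the agent can reply. A toy sketch of that routing, with hypothetical helper names rather than the real interface0 backend:

```typescript
// Hypothetical sketch only: turning an inbound email into a chat turn.
// Helper names here are illustrative, not interface0's real code.

interface InboundEmail {
  from: string;
  subject: string;
  text: string;        // plain-text body, often with quoted history appended
  inReplyTo?: string;  // present when the sender replied to an earlier agent email
}

// Keep only the sender's new text; quoted history usually starts at a line like
// "On <date>, <someone> wrote:".
function stripQuotedHistory(body: string): string {
  const marker = body.search(/^On .+ wrote:$/m);
  return (marker === -1 ? body : body.slice(0, marker)).trim();
}

function routeEmail(mail: InboundEmail): { chat: "existing" | "new"; prompt: string } {
  return {
    chat: mail.inReplyTo ? "existing" : "new", // replies continue the same chat
    prompt: `Subject: ${mail.subject}\n\n${stripQuotedHistory(mail.text)}`,
  };
}
```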
Thanks to Kevin for pushing me on this feature. Email is a really great interface for AI — I had experimented with this previously, but now with all of interface0's tools/knowledge/personas/etc. it's even more compelling. I'm working on other interesting email integrations too...
... did I mention this was a big feature? As such, the list of other improvements/features is short this week...
Better search performance for users with lots of chats.
Tool calls will now show marginally more entertaining loading text :)
For certain long messages (like email threads with lots of quoted history), there is now an expandable "show additional content" button.
I fixed an issue with long-runtime multi-model synthesis (thanks Senya, Sara, John, and others for reporting) — but for now, you may need to refresh the page to see the results of these long-running tasks. Fixing the streaming will require a bigger refactor to deal with Vercel's function runtime limitations.
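For context on the Vercel constraint: serverless function invocations have a capped duration, so a stream that outlives its function gets cut off even if the work completes in the background. If you're building something similar on Next.js, the per-route knob looks roughly like this (assuming an App Router route handler; check Vercel's current plan limits):

```typescript
// In a Next.js App Router route handler deployed on Vercel, this route segment
// config raises the allowed execution time, up to whatever your plan permits.
export const maxDuration = 300; // seconds

export async function POST(request: Request) {
  // ...kick off the long-running synthesis work here...
  return new Response("ok");
}
```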
"Template messages" (the leftmost button in the bottom chat bar) are now way more powerful. You can now configure templates in that prompt library to enable a specific set of tools, models, personas, and knowledge. This is a big step towards building more comprehensive automated workflows into interface0.
Three examples, all of which are shared with beta testers:
Look at those prompts for inspiration for making your own workflows / template prompts!
A riff on the synthesis tool. The Truth-seeker goes through cycles of proposing an answer → critiquing it through a sort of Popperian/Deutschian fallibilist lens → refining it → critiquing again, until only minimal changes remain. Just select it from the tools menu and ask the agent to use it. I've been very impressed by the quality of answers so far. By default it uses Sonnet 4 to propose the draft and o3 to critique it, but you can ask it to use different models.
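If you want a mental model for what the Truth-seeker is doing, it's essentially this loop. A conceptual sketch with made-up function names; the real tool's prompts and stopping rule are more involved:

```typescript
// Conceptual sketch of the propose -> critique -> refine cycle.
// callModel is a stand-in for whatever chat-completion client you use.
type CallModel = (model: string, prompt: string) => Promise<string>;

async function truthSeek(
  question: string,
  callModel: CallModel,
  { drafter = "claude-sonnet-4", critic = "o3", maxRounds = 4 } = {}
): Promise<string> {
  let draft = await callModel(drafter, `Answer carefully:\n${question}`);

  for (let round = 0; round < maxRounds; round++) {
    // Critique from a fallibilist stance: hunt for errors rather than confirmation.
    const critique = await callModel(
      critic,
      `Find flaws, missing considerations, and errors in this answer.\n` +
        `If only trivial changes are needed, reply exactly "MINIMAL".\n\n${draft}`
    );
    if (critique.trim() === "MINIMAL") break; // converged: the critic sees little to fix

    draft = await callModel(
      drafter,
      `Revise the answer to address this critique.\n\nAnswer:\n${draft}\n\nCritique:\n${critique}`
    );
  }
  return draft;
}
```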
You can now use "/" commands to trigger a template prompt, and "@" commands to select knowledge to attach or tools to use; enter / command-enter now submits dialog boxes; there's more intuitive enter / command-enter / shift-enter behavior in the main chat box; command-u now starts a new chat (thanks Brenner) and command-/ opens keyboard shortcut list.
Enhance prompt now pops up a dialog where you can approve/re-run/edit the enhanced prompt while seeing your existing one. Thanks CJ for the idea. One extra click now but I think it's worthwhile? Let me know if you disagree.
LaTeX now renders (thanks Brenner); tables and lists look cleaner; headers are detected better (every LLM seems to return these differently); user messages now handle code blocks; public chats now have more complete formatting and better styling.
Lots of improvements across the board:
Removed phone/email as default tools (thanks Alexa, sorry we sent Jeff Bezos that email draft...); updated everyone's triggered message settings to a better model and useful tools.
Lengthy transcriptions and agent responses can now run for longer before timing out.
I did a big refactor on the backend to centralize certain configuration data across the app. If you spot any errors / unexpected behavior, please let me know!
Everyone's default persona has now been set to a special, live persona called "interface0." This persona incorporates whatever your existing default persona was, plus it will personalize over time based on two things: ambiently paying attention to your feedback to interface0 (e.g. when you include something like "no that's not what I meant" or "yes that's great" in a message), and explicit "vibechecks."
Occasionally, a button will show up below a message that says "Vibecheck this response," or you can submit one anytime by hitting the indicated icon below a message. This will pop up a vibecheck dialog where you can put whatever short-form (even just a couple words) feedback you want on that response. interface0 will incorporate this feedback, further personalizing your default persona and improving the ✨vibes✨ of interface0 over time.
Again, this can happen both ambiently and explicitly, and I've already seen great improvements in my own account from this ongoing personalization of the product — it's like a powered-up version of memory, where the agent remembers not only facts about you, but also conforms itself more tightly to your explicit and implicit preferences and style.
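Mechanically, you can think of it as a running list of preference notes folded into your default persona on every chat. A toy sketch (purely illustrative, not the actual implementation):

```typescript
// Toy model of how ambient + explicit feedback could accrete into a persona.
// All names are illustrative.

interface PersonaState {
  basePersona: string;       // whatever your default persona said before
  preferenceNotes: string[]; // distilled from vibechecks and in-chat reactions
}

function recordVibecheck(state: PersonaState, feedback: string): PersonaState {
  return { ...state, preferenceNotes: [...state.preferenceNotes, feedback.trim()] };
}

function buildSystemPrompt(state: PersonaState): string {
  return [
    state.basePersona,
    "Known preferences from this user's feedback:",
    ...state.preferenceNotes.map((n) => `- ${n}`),
  ].join("\n");
}
```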
In your tool selector, you can select a "to-do list" tool. This allows the agent to create and track its own to-do list for that chat. This is very useful for complicated, multi-step tasks. It's worth considering whether to include this in any of your complex template messages for nuanced workflows, asking the agent to first create a to-do list and then work through it. You can see the current status of the to-do list (if it exists) in the chat header.
(Feature inspired by this tweet!)
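If you're curious what "the agent creates and tracks its own to-do list" looks like under the hood, it's a tool the model can call with structured items, which the chat header then renders. A minimal sketch of such a tool definition, in the style of JSON-schema function tools (illustrative, not interface0's exact schema):

```typescript
// Illustrative tool definition; the real interface0 schema is not published.
const todoListTool = {
  name: "update_todo_list",
  description: "Create or update the to-do list for this chat.",
  parameters: {
    type: "object",
    properties: {
      items: {
        type: "array",
        items: {
          type: "object",
          properties: {
            title: { type: "string" },
            status: { type: "string", enum: ["pending", "in_progress", "done"] },
          },
          required: ["title", "status"],
        },
      },
    },
    required: ["items"],
  },
} as const;
```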
If you're a Codegen user (OpenAI Codex competitor), you can now enable Codegen in your tool settings and add your API key. I love this — I can now trigger coding agents to work on interface0 or other projects from within interface0 itself, with all the context/knowledge that interface0 has.
Lots of improvements across the board:
In tool settings, you can now choose which model will be used for triggered messages (e.g. from inbound emails), as well as what tools that model will have access to. I want my agents to be able to reply to emails, so I've enabled the email tool, but you don't have to if you're concerned about errant agents.
I added Moonshot's Kimi K2 model to your model selector — it's interesting to play around with, and is getting a lot of buzz (especially around its tool use capabilities). I also swapped the default model to Sonnet 4; it is much better than GPT-4.1 at tool calling.
In the tools menu, you can select "multi-model synthesis" and then ask interface0 to get answers from multiple models. If you don't specify which models, it will propose ones for you. I've been using this a lot — getting the best of asking multiple models to draft an email, or write code, or give me explanations. You can see the individual responses at the bottom of the synthesized message too.
Thanks CJ and Jay for the idea.
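Under the hood, multi-model synthesis is conceptually a fan-out followed by one more model call to reconcile the drafts. A sketch with placeholder names:

```typescript
// Conceptual fan-out/synthesis sketch; callModel is whatever client you use.
type CallModel = (model: string, prompt: string) => Promise<string>;

async function synthesize(
  prompt: string,
  models: string[],
  callModel: CallModel,
  synthesizer = "claude-sonnet-4"
): Promise<{ answers: Record<string, string>; synthesis: string }> {
  // Ask every model in parallel.
  const drafts = await Promise.all(models.map((m) => callModel(m, prompt)));
  const answers = Object.fromEntries(models.map((m, i) => [m, drafts[i]]));

  // Then ask one model to reconcile the drafts into a single answer.
  const synthesis = await callModel(
    synthesizer,
    `Combine the strongest parts of these answers into one response.\n\n` +
      models.map((m) => `## ${m}\n${answers[m]}`).join("\n\n")
  );
  return { answers, synthesis };
}
```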
You can now add interface0 as an app on your phone's homescreen: browse to it in your phone's native web browser and install it, or hit share and then "add to homescreen." It's a much better experience on mobile this way! Thanks Jai for pushing me on this.
The sparkly "enhance prompt" button now knows if you have a reasoning or non-reasoning model selected. If it's a reasoning model (Grok 4, o3, etc.), it applies a more in-depth enhancement to meet the needs of that category of models. Otherwise it keeps it simple. I'll continue to improve this to tailor the prompt enhancement to specific models. Thanks CJ for the idea.
When starting a new chat, you can tap the crossed-out camera icon, and that chat won't have access to memories, nor will it create new ones. Useful for chats where you don't want interface0 to permanently remember the content. Thanks Jay for the idea.
You're all now on a "beta tester" team — so you may occasionally see new template prompts, personas, knowledge entries, and chats show up in your interface.
Bunch of mobile UI/UX improvements:
You can now label who you sent an invite to and see the time you sent it (thanks CJ), plus the invite flow is smoother/clearer overall. And the homepage improved again.
Upgraded Grok to Grok 4 for all of you!
Added a nice pulsing gradient animation in a few places to better indicate loading, and also to make the product slightly less boring and black and white...
In settings, you can now create "teams" with other users. Then you can share template prompts, personas, and contexts/knowledge with them — so you can build a prompt library for your company, or a shared context base that all your team's agents can reference.
You can share chats with other users, or with a whole team you are on, and the chat will show up in those people's interface0 inboxes. You can make them viewers or editors — if the latter, they will also be able to send messages in the chat, so you can have collaborative/team-wide conversations with AI.
You can now publish chats and share a public link to them.
You can now edit your last message in a conversation, and the agent's reply will regenerate accordingly. (Thanks Jay for pointing out a bug with this)
"System prompts" is now "personas" and "contexts" is now "knowledge" after some feedback during onboardings.
They weren't seeing much usage, and it took up a bunch of space / was confusing to people. If you miss this feature, let me know and I can figure out how to bring it back.
I spent some time this week making the new chat button snappier and testing security. I will continue to improve on these.
interface0 agents can now use any MCP server, including anything Zapier connects to (i.e. basically any software)! Connect your Google Drive, your Shopify, your email, calendar, Salesforce, whatever and interact with it from interface0. Here's a quick demo video of the MCP integration.
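For the technically inclined: MCP is a protocol for exposing tools and resources to agents over a standard client-server handshake. Here's a rough sketch of a client connecting to a local MCP server with the official TypeScript SDK (@modelcontextprotocol/sdk); treat the exact method names as approximate, and note that remote servers like Zapier's use an HTTP/SSE transport rather than stdio:

```typescript
// Rough sketch using the MCP TypeScript SDK; check the SDK docs for exact APIs.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Spawn and connect to a local MCP server (a filesystem server, for example).
  const transport = new StdioClientTransport({
    command: "npx",
    args: ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
  });
  const client = new Client({ name: "demo-client", version: "1.0.0" });
  await client.connect(transport);

  // Discover the tools the server exposes, then call one.
  const { tools } = await client.listTools();
  console.log(tools.map((t) => t.name));

  const result = await client.callTool({
    name: "list_directory",
    arguments: { path: "/tmp" },
  });
  console.log(result);
}

main().catch(console.error);
```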
If you don't want to manually choose models, contexts, system prompts, and tools, you can now click the 🧠 icon in the chatbox and autopilot will select the optimal settings for you.
Thanks Henry for the idea.
In the "manage" views, you can click the share icon to get a link to send to other interface0 users. For example, here's a template message you could add to your account. (Teaser: I'm working on some "team" features too, so teams can keep templates/contexts/prompts updated together...)
Feel free to send invites to friends (upper right hand corner of the site) and they will land on the new interface0.com.
In the top nav, there's now an option to download the current chat contents for use elsewhere; plus the length of message you can put into interface0 is now much longer. Thanks Tyler for the ideas.
In settings you can now toggle which tools are default-on for new chats, and which specific functionality you want the tools to have.
I updated your selected models to the latest-and-greatest.
In the model selector, pick "auto" and the system will choose for you, considering cost / speed / complexity. You can set "auto" as your default model in settings. Thanks CJ for the idea.
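To demystify "auto" a bit, think of it as a small routing heuristic over cost, speed, and task complexity. A toy version (thresholds and model choices are purely illustrative):

```typescript
// Toy router: pick a cheaper/faster model for simple requests, a stronger one otherwise.
// The real "auto" logic is more nuanced; this only illustrates the trade-off.

interface ChatRequest {
  prompt: string;
  needsTools: boolean;
}

function estimateComplexity(req: ChatRequest): number {
  let score = Math.min(req.prompt.length / 2000, 1); // longer prompts look harder
  if (req.needsTools) score += 0.5;                   // tool use favors stronger models
  if (/prove|analyze|architecture|debug/i.test(req.prompt)) score += 0.5;
  return score;
}

function pickModel(req: ChatRequest): string {
  const score = estimateComplexity(req);
  if (score < 0.5) return "gpt-5-mini";      // cheap and fast
  if (score < 1.0) return "claude-sonnet-4"; // balanced default
  return "o3";                               // slow/expensive, strongest reasoning
}
```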
In the top nav, there's a button to summarize the current chat. You can either copy/paste this content, or create a new "context" with it (see last week's update re: contexts). Useful when kicking off an ongoing project.
You can put variables in template messages, and when you trigger the template you will be prompted to fill them in.
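Template variables work the way you'd expect: placeholders in the saved prompt get filled in with your answers when you trigger it. A minimal sketch (the {{name}} syntax here is illustrative; the template editor shows the exact format):

```typescript
// Minimal placeholder substitution; the real template syntax may differ.
function fillTemplate(template: string, values: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) => values[key] ?? match);
}

// e.g.
// fillTemplate("Draft a follow-up email to {{client}} about {{topic}}.",
//              { client: "Acme", topic: "the Q3 renewal" })
// -> "Draft a follow-up email to Acme about the Q3 renewal."
```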
In the user menu, you can copy your user-specific "forwarding email." Any email sent to this address will kick off a new chat in your account, and the agent will suggest next steps.
Both chat- and user-specific agent emails are now much shorter than they were before.
This is interface0's equivalent of ChatGPT/Claude "projects." You create a "context" with text and file/image content, and then can tag in that context on any message. That way, a single chat can cross-reference information stored in multiple contexts rather than being stuck in one project.
You can half-ass writing a prompt, hit the enhance button, and it will make it clearer. Inspired by v0.
The hidden system prompt now always includes your name, email, models you could have chosen from (thanks CJ for the idea), and more. Context is king.