I was on a call with a fintech company engineer when the DevDay keynote started streaming. We had the livestream on one monitor and a half-finished RAG implementation on the other. About twenty minutes in, we both went quiet. Then he said, “So… do we still need this?”
That question – “do we still need this?” – is the real story of DevDay. Not GPT-4 Turbo. Not the Assistants API. Not Custom GPTs. The story is that OpenAI just told every team building on their platform: we’re going to own more of the stack now. And you need to decide how you feel about that.
What Actually Shipped
GPT-4 Turbo is the one that matters most for day-to-day work. 128K context window. Better instruction following. JSON mode that actually works. Lower prices – roughly 3x cheaper input tokens than GPT-4. The practical effect is immediate: prompts I was carefully engineering to fit in 8K can now be sloppy and long. Function calling went from “fragile hack” to “usable feature.” Cost assumptions that made certain products unviable just changed underneath you.
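To make the JSON-mode point concrete, here’s a minimal sketch of what the request looks like with the openai>=1.0 Python SDK. The request is built as a plain dict so the shape is visible; in real code you’d pass these as kwargs to `client.chat.completions.create(**request)`. The prompt and the sample reply are illustrative, not real API output.

```python
import json

# JSON-mode request shape (openai>=1.0 SDK; "gpt-4-1106-preview" was
# the DevDay alias for GPT-4 Turbo).
request = {
    "model": "gpt-4-1106-preview",
    "response_format": {"type": "json_object"},  # the new JSON mode
    "messages": [
        # JSON mode requires the word "JSON" to appear in the prompt.
        {"role": "system",
         "content": "Extract the fields as JSON: {amount, currency, date}."},
        {"role": "user",
         "content": "Wire $1,250 to ACME Corp on 2023-11-08."},
    ],
}

def parse_reply(content: str) -> dict:
    """With JSON mode on, message content is guaranteed to parse as JSON,
    so the try/except-and-retry dance around json.loads goes away."""
    return json.loads(content)

# Illustrative reply (hypothetical, not captured from the API):
sample = '{"amount": 1250, "currency": "USD", "date": "2023-11-08"}'
print(parse_reply(sample)["currency"])  # USD
```

The `response_format` line is the whole feature: before DevDay you coaxed JSON out with prompt engineering and retries; now it’s a request parameter.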
I rewrote two prompts that week. Both got simpler. Both worked better. That’s the kind of improvement I respect – not a new capability, but a dramatic reduction in friction for existing ones.
The Assistants API is more interesting and more concerning. It bundles threads, tool execution, file retrieval, and conversation state into a managed service. You describe an assistant, feed it files, and it handles the orchestration. For prototypes and internal tools, this is incredible. I spun up a document Q&A assistant in about an hour that would have taken days with our custom setup.
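For a sense of how little code the managed path takes, here’s a sketch of that document Q&A flow against the beta Assistants endpoints as they shipped at DevDay (openai>=1.0 SDK). The file path, name, and instructions are my stand-ins, and I’m not polling the run to completion – this is the shape, not a production client.

```python
# Assistant definition: OpenAI owns chunking, embedding, retrieval,
# and thread state once you hand over the file.
assistant_spec = {
    "name": "Doc Q&A",  # illustrative name
    "model": "gpt-4-1106-preview",
    "instructions": "Answer questions using only the attached documents.",
    "tools": [{"type": "retrieval"}],  # OpenAI-managed file retrieval
}

def ask_document(client, file_path: str, question: str):
    """The whole managed loop: upload -> assistant -> thread -> run.
    Everything between upload and run is a black box to you."""
    f = client.files.create(file=open(file_path, "rb"), purpose="assistants")
    assistant = client.beta.assistants.create(file_ids=[f.id], **assistant_spec)
    thread = client.beta.threads.create()
    client.beta.threads.messages.create(
        thread_id=thread.id, role="user", content=question)
    # Caller still needs to poll the run and read thread messages back.
    return client.beta.threads.runs.create(
        thread_id=thread.id, assistant_id=assistant.id)
```

Compare that to a custom stack: no vector store to stand up, no chunker to tune, no conversation table to migrate. That’s the appeal – and, as I’ll get to, the catch.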
The concern is control. When OpenAI manages the thread, the retrieval, and the tool execution, you lose visibility into what’s happening. You can’t tune the retrieval. You can’t inspect the intermediate reasoning. For a quick prototype, that’s fine. For a production system handling financial data at the fintech company, I need to see what’s happening under the hood.
Custom GPTs are ChatGPT plugins done right. No-code assistants that anyone can build and share. For developers, this is a double-edged sword. It’s a distribution channel – you can ship lightweight tools that live inside ChatGPT. It’s also competition – because everyone else can, including non-developers. If your startup is “ChatGPT but with this one extra feature,” you now have a problem.
The Build-vs-Buy Shift
This is where it gets strategic. Before DevDay, the standard architecture for an AI feature was: pick a model, build a RAG pipeline, manage conversation state, wire up tools, handle the orchestration yourself. Lots of plumbing. Lots of control.
After DevDay, OpenAI is offering to handle most of that plumbing. The question is no longer “can we build this ourselves?” It’s “should we?”
My framework: use the managed path for anything that isn’t a core differentiator. If your product’s value comes from the quality of your retrieval, the specificity of your tool calls, or strict data governance, keep building custom. If the AI feature is a nice-to-have or an internal tool, the Assistants API will get you there in a fraction of the time.
The danger is the middle ground. Features that feel custom but aren’t actually differentiated. These are the ones that will get swallowed by the platform, and the teams building them will realize too late that they have been maintaining infrastructure OpenAI now gives away.
RAG Isn’t Dead (But the Bar Just Went Up)
I keep seeing “RAG is dead” takes. They’re wrong, but the kernel of truth is real. With 128K context and built-in retrieval, the bar for justifying a custom RAG pipeline just got much higher.
If you’re stuffing a few documents into context and asking questions, the Assistants API does this out of the box. If you need precise control over chunking, embedding models, re-ranking, or compliance with data residency requirements, custom RAG is still the answer.
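The knobs in that second case are exactly what the managed path hides. A toy sketch, using keyword overlap as a stand-in for real embeddings and a re-ranker, just to name the levers you keep when you build custom:

```python
# The three knobs a custom pipeline exposes and the Assistants API doesn't:
# chunk size, chunk overlap, and a swappable scoring/re-ranking step.
def chunk(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Fixed-size chunks with overlap; both numbers are tunable."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def score(query: str, passage: str) -> float:
    """Toy relevance: fraction of query words present in the passage.
    In a real pipeline this is an embedding similarity plus a re-ranker."""
    q = set(query.lower().split())
    return len(q & set(passage.lower().split())) / (len(q) or 1)

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Chunk everything, score everything, return the top k."""
    chunks = [c for d in docs for c in chunk(d)]
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]
```

Swap `score` for a re-ranker, pin `chunk` to your document structure, log every intermediate result for an auditor – that’s the control you’re paying the maintenance cost for.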
At the fintech company, we’ll keep our custom retrieval. Financial data has strict requirements that a black-box retrieval system can’t satisfy. But I’d estimate that 60-70% of the RAG implementations I’ve seen in the wild could be replaced by the Assistants API with no loss in quality. Those teams should take the free lunch.
What I’m Doing About It
The same week as DevDay, I started a review of every custom component in our AI pipeline. The question for each one: does this still earn its maintenance cost?
Three things survived the review. Everything else is getting migrated or simplified.
That’s the right response to DevDay. Not panic. Not hype. A sober assessment of what’s now commodity and what’s still worth owning. OpenAI moved the line. The smart move is to acknowledge it and redraw your architecture accordingly.