How Are Fast, Cheap Open Source Models Changing How We Build?


Aditya Lahiri
CTO & Co-Founder @ OpenFunnel
Inference providers like Groq unlock two shifts:
LLM-native architecture: we replaced deterministic code with LLMs from day one.
Example: natural-language blocklists. Instead of hardcoded lists, we use Groq plus real-time web search. "Exclude marketing agencies" just works.
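A minimal sketch of how a natural-language blocklist check could work. The prompt shape, function names, and the stand-in `llm` callable are assumptions for illustration; in production the callable would wrap a real inference API (e.g. Groq's OpenAI-compatible chat endpoint), not the stub shown here.

```python
from typing import Callable

def build_blocklist_prompt(rule: str, company: str) -> str:
    # Hypothetical prompt shape: ask the model for a strict YES/NO verdict.
    return (
        "You are a lead-filtering assistant.\n"
        f"Rule: exclude any company matching: {rule}\n"
        f"Company description: {company}\n"
        "Answer with exactly YES (exclude) or NO (keep)."
    )

def is_excluded(rule: str, company: str, llm: Callable[[str], str]) -> bool:
    """llm is any callable mapping a prompt to completion text."""
    answer = llm(build_blocklist_prompt(rule, company)).strip().upper()
    return answer.startswith("YES")

# Stub standing in for a real model call, for demonstration only:
stub = lambda prompt: "YES" if "marketing agency" in prompt.lower() else "NO"

print(is_excluded("marketing agencies", "Acme, a digital marketing agency", stub))  # True
print(is_excluded("marketing agencies", "Acme, a developer tools startup", stub))   # False
```

Because the model call is just a callable, swapping models (OSS vs. SOTA, or provider to provider) means changing one argument, not rewriting the filter.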
Rapid prototyping with model swaps: OSS models for intermediate reasoning, SOTA models for complex reasoning.
A fast v1 beats perfection; optimize later based on usage.
The insight: orchestrate a hierarchy of models: cheap and fast for most flows, expensive and smart only when needed.
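The hierarchy idea can be sketched as a fallback chain: try the cheap tier first and escalate only when it declines. Tier names, the decline-by-returning-None convention, and the stub heuristics below are all assumptions, not the author's actual routing logic.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class Tier:
    name: str
    # A model call that returns None when it cannot answer confidently.
    run: Callable[[str], Optional[str]]

def run_hierarchy(task: str, tiers: List[Tier]) -> Tuple[str, str]:
    """Try cheap/fast tiers first; escalate to the next tier on a decline."""
    for tier in tiers:
        result = tier.run(task)
        if result is not None:
            return tier.name, result
    raise RuntimeError("no tier produced an answer")

# Stubs standing in for real model calls (hypothetical names and heuristic):
cheap = Tier("oss-8b", lambda t: "ok" if len(t) < 40 else None)  # declines long tasks
smart = Tier("sota",   lambda t: "ok")                           # always answers

print(run_hierarchy("short task", [cheap, smart]))  # ('oss-8b', 'ok')
print(run_hierarchy("x" * 100,   [cheap, smart]))   # ('sota', 'ok')
```

The economics follow directly: most traffic stops at the cheap tier, so the expensive model's cost is paid only on the hard tail.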