How Are Fast, Cheap Open Source Models Changing How We Build?

Aditya Lahiri

CTO & Co-Founder @ OpenFunnel

Inference providers like Groq unlock two shifts:

  1. LLM-native architecture. We replaced deterministic code with LLMs from day one.

Example: natural-language blocklists. Instead of hardcoded lists, we use Groq plus real-time web search. "Exclude marketing agencies" just works.

  2. Rapid prototyping with model swaps

  • OSS models for intermediate reasoning

  • SOTA models for complex reasoning

v1 fast > perfection. Optimize later based on usage.
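The natural-language blocklist from point 1 can be sketched as a filter whose exclusion logic is delegated to an LLM call. This is a minimal, offline-runnable sketch: `stub_llm_excludes` is a hypothetical keyword stand-in for the real Groq + web-search call, and all names here are illustrative, not OpenFunnel's actual code.

```python
from typing import Callable

def filter_companies(companies: list[str], rule: str,
                     llm_excludes: Callable[[str, str], bool]) -> list[str]:
    """Keep only companies the natural-language rule does not exclude."""
    return [c for c in companies if not llm_excludes(c, rule)]

# Stand-in for a real LLM call (e.g., a Groq chat completion plus web search).
# A keyword match keeps this sketch runnable offline.
def stub_llm_excludes(company: str, rule: str) -> bool:
    return ("marketing agencies" in rule.lower()
            and "marketing agency" in company.lower())

kept = filter_companies(
    ["Acme Robotics", "BrightSpark Marketing Agency"],
    "Exclude marketing agencies",
    stub_llm_excludes,
)
# kept == ["Acme Robotics"]
```

Because the exclusion rule is plain English interpreted at call time, changing the blocklist means editing a sentence, not redeploying code.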

The insight: orchestrate a hierarchy of models: cheap/fast for most flows, expensive/smart only when needed.
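The hierarchy above can be sketched as a simple router: a cheap OSS model handles routine steps, and only tasks past a complexity threshold escalate to the SOTA model. Model names, prices, and the complexity score are hypothetical placeholders, not real provider figures.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # illustrative pricing, not real quotes

CHEAP = Model("oss-small", 0.05)   # fast OSS model for intermediate reasoning
SMART = Model("sota-large", 5.00)  # expensive SOTA model for complex reasoning

def route(task_complexity: float, threshold: float = 0.8) -> Model:
    """Send most flows to the cheap model; escalate only hard tasks.

    task_complexity is assumed to be a 0-1 score from an upstream
    heuristic or classifier (itself a good job for a cheap model).
    """
    return SMART if task_complexity > threshold else CHEAP
```

With usage data, the threshold and the model assignments can be tuned later, which is exactly the "optimize later based on usage" approach described above.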

Made with ❤️ in SF

© 2026 OPENFUNNEL. ALL RIGHTS RESERVED.

