GPT-OSS Is Now Our Workhorse for Bulk LLM Operations

Aditya Lahiri

CTO & Co-Founder @ OpenFunnel

We're processing hundreds of thousands of rows at database scale, and GPT-OSS has become our workhorse.

Here's why it's changed our workflow:

  • Standard classification and reasoning tasks? GPT-OSS handles them beautifully: entity extraction, data categorization, reasoning problems - tasks that used to require custom models or heavy preprocessing.

  • Cost is negligible. At scale, this matters. We can afford to throw the model at problems we'd normally script around.

  • Speed is insane. What used to take days of batch processing now finishes in hours, and latency is low enough that parallelization actually works.

  • Multi-cloud flexibility. Since it's open source, it's available across all major cloud providers. We can distribute our credits and optimize for availability. No vendor lock-in.
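The "parallelization actually works" point above can be sketched in a few lines. This is a minimal illustration, not our production code: the `classify_row` helper is a placeholder for a real call to a GPT-OSS endpoint (e.g. an OpenAI-compatible chat-completions API on your cloud of choice), and the worker count and labels are made up for the example.

```python
# Sketch: fan out bulk row classification across threads.
# `classify_row` stands in for a real model call (assumption, not
# the actual OpenFunnel pipeline).
from concurrent.futures import ThreadPoolExecutor

def classify_row(row: str) -> str:
    # In practice: POST the row to a GPT-OSS endpoint with a
    # classification prompt and parse the label from the response.
    return "lead" if "signup" in row else "other"

def classify_bulk(rows: list[str], max_workers: int = 32) -> list[str]:
    # Low per-request latency is what makes wide fan-out worthwhile:
    # each worker spends its time on the network call, not on compute.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(classify_row, rows))

labels = classify_bulk(["signup from acme.com", "newsletter bounce"])
```

Because the model call is I/O-bound, a thread pool is enough; swap in an async client or a queue-based worker fleet once you're past a few hundred thousand rows.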

Honestly hoping we see more open source models from OpenAI this year. The sweet spot of capability, speed, and cost is exactly where production AI needs to be.

What models are you using for bulk LLM operations at scale?

Made with ♥ in SF

© 2026 OPENFUNNEL. ALL RIGHTS RESERVED.
