
Fenil Suchak

The forcing function we built for customer success

The Forcing Function

A concept we introduced internally to help customers reach the end metric in under 2 weeks.

Why this?

We realized that GTM had become a strange creative game: customers were thrilled by the data, the signals, and the complex things they could come up with using them.

We loved it. They loved it.

But then came the final boss: Customer Success and ROI.

We realized we had never talked about it.

Customer Happiness != Customer Success

In B2B it's easy to build whatever customers want, but if they aren't winning with it, they won't rapidly pull more of the core value out.

So we had to focus hard on customer success. It's a sprint, but we focus on removing ANY blocker at all that keeps customers from succeeding.

Literally ANY blocker, even if it's not related to the product.

We define what success looks like with OpenFunnel and help them operationalize it in their daily routine.

We don't change their existing process; we become an invisible but active part of their daily success routine.


Fenil Suchak

Why AI-SDRs don't have skin in the game

SDRs have Urgency + Hunger.

Automated Engineered AI Outreach absolutely lacks it.

I see founder-like hustle when I talk to SDRs.

They have clear metrics. Clear numbers. Clear targets.

This makes them hyper-focused on the goal.

A day of dialing is super mission critical for them.

AI-SDR is a slop machine because it's just a generative machine.

AI (or GTM Engineers) have no skin in the game, no incentives or true feedback.

When the feedback of meetings not booked hits an SDR, great SDRs auto-improvise (or can, given access to the right, easy tools).

They switch channels. They go find the person. They look for better things.

It's all dictated backwards from a true feedback loop.

Which AI-SDR lacks.


Fenil Suchak

Why we scrapped our outbound execution engine

Something we refrained from building is an outbound execution engine.

We built it at one point and then completely scrapped it.

Why?

Two points:

  1. Focus on the Real-time Data Layer

At early stages, your business pulls your focus toward the most mission-critical, revenue-influencing part of the product.

If it's the automation or outbound execution layer - it almost always becomes mission-critical, and your focus shifts entirely there.

This takes away focus and precision on the data layer, making your data layer a commodity and causing your users to start getting data from other sources.

  2. The dying trend of outbound email/LinkedIn blast automation

With Claude Code, it's super clear that the number one bottleneck was operationalization, state management, and the number of clicks needed to manually outbound to an audience of, say, 50 people. The modern UX is a nightmare for that.

The solution to that isn't spray and pray - it's reducing the number of clicks to zero for manual outbound.

This way, the human in the loop always exists, but they're able to do much more precise outbound in terms of volume.

This allows for better ideas around the data layer, precision in finding pain points, accurate timing, and trigger-based outbound.

We see cold-calling-like mechanics happening to LinkedIn outreach and emails - where the human parts are essentially baked in.


Fenil Suchak

Reps Should Wake Up to Alerts Not Empty Pipelines

Pick up the phone and start dialing.

The endstate of outbound isn't lead gen strategy - it's the rep on a call with the right person at the right time.

Most teams operate out of datadumps & lists. Buy a list. Burn through it in 2 weeks.

Then what?

Spam the same people harder. Or spend SDR time sourcing new ones.

Both are inefficient. One is spam and the other is wasted capacity.

The market moves daily. Net-new accounts that are ready to invest in solving a pain point emerge - daily.

That window is short and reps need to be there when it's happening.

Here's how we think about it for OpenFunnel customers:

Day-0 - Configure signal-based new-account discovery cannons.
Domain-specific signals - not just funding and job changes, but custom, promptable searches for companies with active pain points.

Day-1 - Route to reps. Round robin, ROE-based, whatever your motion is. Slack alerts for every rep on their already-assigned accounts. Everything auto-writes to CRM with SOTA Deduplication logic.

Day-2 onwards - Reps wake up to alerts. Contact, enrichment, context, and a clear reason why this person is facing a pain-point right now.

No ramp. No sourcing. Just action.
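The Day-1 routing step could look something like this minimal sketch - plain round-robin with ownership-based dedup. The names and the `crm` dict are illustrative stand-ins, not OpenFunnel's actual logic:

```python
from itertools import cycle

def route_accounts(accounts, reps, crm):
    """Assign each surfaced account to a rep, respecting existing ownership."""
    rotation = cycle(reps)
    assignments = {}
    for account in accounts:
        if account in crm:                 # dedup: account already owned
            assignments[account] = crm[account]
        else:                              # net-new: round-robin to next rep
            owner = next(rotation)
            assignments[account] = owner
            crm[account] = owner           # auto-write ownership back
    return assignments

crm = {"acme.com": "dana"}                 # pre-existing CRM ownership
routed = route_accounts(["acme.com", "foo.io", "bar.ai"], ["ana", "ben"], crm)
# acme.com stays with dana; foo.io and bar.ai round-robin to ana and ben
```

A real version would swap round-robin for ROE-based rules and fire a Slack alert per assignment, but the shape - dedup first, rotate only net-new - stays the same.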

We've seen this clearly - the closer to signal the outreach happens, the higher the pipeline probability.

Manual outbound first. Cold calling second. Automated multi-channel only if you have to.


Aditya Lahiri

Domain Resolution Needs Reasoning Not Rules

Your CRM thinks it has clean data. It doesn't.

And no string-matching algorithm is going to fix that, because the problem was never about strings.

- getstripe.com and stripe.com are the same company, just different marketing domains.

- aws.amazon.com is a subsidiary of Amazon, but careers.google.com is not, even though the same logic would say it is.

- novonordisk-us.com is Novo Nordisk operating in the US, not a separate entity.

- A bit.ly link gives you no company information at all until you follow it.

And the CRM data underneath all of this is already messy, full of wrong domains, duplicates, and entries that passed whatever basic check was in place at the time.

The real question is not a string match. It's: do these two domains belong to the same company?

That question needs judgment, not logic. It needs reasoning!

So the system we built reflects that. It resolves domains to DNS, scrapes homepages, uses an LLM to compare content and make a call on company identity, adds web search to reason about subsidiaries, and uses heuristics and world knowledge to generate the right candidates before any of that runs.

The architecture is layered because the problem is layered.
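The layering can be sketched roughly like this - cheap deterministic checks first, the expensive reasoning layer last. All function names here are illustrative stand-ins, and the LLM judge is stubbed out:

```python
def resolve(domain_a, domain_b, layers):
    """Run cheap layers first, expensive reasoning last.

    Each layer returns True (same company), False (different), or None
    (undecided - defer to the next layer).
    """
    for layer in layers:
        verdict = layer(domain_a, domain_b)
        if verdict is not None:
            return verdict
    return False  # nothing could decide; stay conservative

# Layer 1: normalization catches trivial matches like a "www." prefix.
def exact_match(a, b):
    norm = lambda d: d.lower().removeprefix("www.")
    return True if norm(a) == norm(b) else None

# Layer 2: heuristic candidate generation, e.g. "get<brand>" marketing domains.
def marketing_prefix(a, b):
    roots = {d.split(".")[0].removeprefix("get") for d in (a, b)}
    return True if len(roots) == 1 else None

# Layer 3 would scrape both homepages, ask an LLM whether the content
# describes the same company, and web-search for subsidiaries; stubbed here.
def llm_judge(a, b):
    return None

layers = [exact_match, marketing_prefix, llm_judge]
resolve("getstripe.com", "stripe.com", layers)       # True via the heuristic
resolve("careers.google.com", "google.com", layers)  # subdomain cases fall
# through to the stubbed reasoning layer - exactly where judgment is needed
```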

Kudos to Swarnab Garang for executing this end to end!


Fenil Suchak

Reps Are Wired for Lists. Markets Work on Fires.

The Quantity isn't enough.

In the early days of OpenFunnel, we constantly heard that the quantity of prospects was too low - "we need way more."

This baffled us.

We were finding net-new prospects with high relevance, timing and inferred pain.

And we wondered: how could we possibly manufacture timing and pain just to add quantity?

Then we dug in.

Reps are wired for volume and lists. Databases with fewer than six digits of contacts don't even make the buying talks.

They'd burn through a massive static contact list (with all the sprinkles of technographics, maybe) in two weeks. With poor results - because the timing was completely off.

So they did what made sense - spam till someone replies.

But all channels are now spammed.

The new savior is "Signals". But the framing is still WRONG.

People are still thinking about it as - how many prospects can I buy that have signals.

The real question is - how many uniquely valuable signal cannons can I deploy that keep giving.

A signal cannon fires when something real happens in the market.

You don't work a list. You respond to fires - each one with context and a reason to reach out baked in.

More cannons = more volume. But only if the cannons are good.

Volume isn't front-loaded anymore. It's spread daily over time.

You never burn through anything. The cannons keep firing.

Reps wake up to new alerts every day - enough volume but spread out over time.


Aditya Lahiri

Customer Enablement and Agent Enablement Are Not the Same Job

Field CTO

A term I heard at a conference organised by Braintrust

I experienced the role end to end while onboarding and supporting our newest customers.

But somewhere in the process, things changed.

Customers weren't asking me to walk them through the product.

They were asking me to train their agent on how to use it.

Customer enablement assumes a human on the other side who needs to understand.

Agent enablement assumes a system on the other side that needs to be configured.

That's not the same job.

Field CTO made sense for the last era.

I'm not sure what we call the next one.


Fenil Suchak

30% discount if you hit your business KPIs with OpenFunnel

Here's why we offer this on our Enterprise Plan

Post onboarding, we were tracking usage, consumption, and credits as the primary indicators of customer success.

Those numbers were consistently strong.

But upsell and enterprise conversion kept getting delayed in a way that didn't match what the product data was telling us.

So we went back to customers to understand what was actually happening on their side.

Credits consumed meant nothing internally for them.

They could show agents running and insights generated but couldn't connect it to a business KPI their leadership cared about.

The product was working but there was no urgency to expand or renew.

We had built the entire post-sales motion around our metrics instead of theirs.

So we changed it.

The first call now starts with aligning on the specific business KPIs the customer needs to hit, planning execution around those goals together, and if they hit them within a defined timeframe, they get 30% off for that period.


Aditya Lahiri

Agents Don't Sleep and Neither Does Your Product Now

I got a production alert at 3am last week.

Not a bug. Not an outage. An agent doing its job too well.

At OpenFunnel, our customers' agents have figured out how to prompt our system efficiently. Agents don't pace themselves. They don't take breaks. They don't wait for business hours to escalate.

Rate limits. Cool-off periods. Agents don't clock out. The best human power user still sleeps. Agents don't.

Building for them means accepting that your product has no off switch.
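One common way to build for that - assuming nothing about OpenFunnel's actual limiter - is a token bucket: agents get their burst, then backpressure instead of a ban.

```python
import time

class TokenBucket:
    """Refill `rate` tokens per second up to `capacity`; each call costs one."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False        # cool-off: caller should back off and retry

bucket = TokenBucket(rate=5, capacity=10)  # 5 req/s steady, bursts of 10
# An agent hammering the endpoint gets its burst, then throttled - not banned.
```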


Fenil Suchak

We Stopped Doing Customer Success. We Do Agent Success.

We felt our customers were under utilizing OpenFunnel.

We talked with everyone on the team - they were underusing it because they had so much on their plate, yet they were constantly stressed about not using OpenFunnel more.

Insights firing every hour - consumption was the bottleneck and we don't believe in AI-SDR without human oversight.

A signal gets generated. Nobody acts on it. It goes stale. A competitor picks it up faster.

Something changed with Agents.

Their agents became the user. Dedicated bots. Wired to use OpenFunnel.

Insight fires, agent acts. Immediately. Every time. No backlog. No stale signals.

Same headcount on their side. Full consumption on ours.

Agent activation actions (our internal success KPIs) went from 100 to 2500 in three weeks.

The agents started auto-optimizing. Deploying more signals on OpenFunnel. Increasing their own consumption without anyone instructing them to.

We shifted our focus from Customer Success to Agent Success and Agent Enablement.


Aditya Lahiri

We Started Training Our Own Models for GTM

Most GTM tools use a foundation model API and call it a day.

We use LLMs for most things too, but sometimes it is like hitting a nail with a sword.

To make GTM intelligence scalable and fast, we now train some of our own models on terabytes of people and company data. Creating inferred attributes from scratch - things no data vendor sells.

We use Baseten to power that infrastructure. Their team came by the office this past week and brought their inference engineering book plus a lot of good energy.

This is turning out to be one of the more important bets we have taken.


Fenil Suchak

Always-On Agents Fix What Credits Could Not

We over-indexed on credits. Here's the problem.

The idea was: more real-time events, more consumption, more action and more credits consumed.

What we didn't account for: the consumer was a human.

GTM teams are buried.

Actions on real-time triggers get missed. List-based outbound becomes the norm.

But markets don't wait for you to make a list when you have the time.

Continuous event-based outbound is not optimized for human consumption.

Always-on agents change this. They don't batch; they execute downstream tasks based on real-time events.

Agents feeding agents is when the credit-based model actually kicks in.


Fenil Suchak

Your Next Reviewer Is an Agent Not a Human

Agent reviews will be a thing pretty soon.

Not human reviews. Not G2. Not Gartner.

Agent reviews.

Here is what that means.

Your software's only job now is to be good enough to be fed to an agent.

Get your Data Layer, APIs, and Docs all agent-ready.

The agent will objectively pick the best tool for the job.

The one that is easiest to access, has better metrics, speed, accuracy.

The one that makes it most efficient at executing.

The one that gives it all the domain knowledge it needs.

This is the most brutal selection pressure for SaaS.

Cold, hard, rational evaluation. (Until, of course, it hallucinates - which it often does.)


Fenil Suchak

Agent Enablement Is Already Here

Here's what's actually happening in my day-to-day right now:

We provide high quality support and debugging over Slack. It's where we talk to customers.

Something interesting has started happening.

Customers now have AI agents/clawdbots that they feed our Slack conversations into. The agent ingests it, figures out what needs to happen, calls the right APIs and MCPs, and replies back.

The human on their side is essentially just a relay.

I'm not talking to my customer. I'm talking to their bot.

And I only realized how real this had gotten when I looked at how my own messages had changed. I've started writing things like:

"Hey agent, you'd need to do it this way..."

Fascinating times to be in.


Fenil Suchak

Signals Tell You When. Humans Tell You How.

"I never take vendor calls but your approach was good, I have to say"

"Thanks for reaching out"

These are common responses I get when I reach out manually, and they usually convert to meetings very quickly.

The secret?

Be helpful, thoughtful & provide value upfront in crafting the message.

Yes - I do use a ton of real-time market insights from OpenFunnel to decide who to reach out to and when, but that's it.

From then on it's human in the loop - the signals and insights fuel my understanding of the prospect and the timing. There's no point in referencing the signal in the outbound message; it's implied by your presence there.

I jump straight to value and how it's uniquely different - providing something they can see upfront.

And if they like it - they reply.


Aditya Lahiri

Builders No Longer Need to Anticipate User Intent

UI/UX was always a translation layer.

The founder had an intent.

Something they wanted to communicate to the user, or understand. So they translated it into a button. A flow. A label.

A "checkout" button is just a translation of "we want you to buy this."

A settings page is a translation of "here's what you can control."

The user had an intent too.

But to express it, they had to learn the builder's translation first. Click this.

Navigate there. Find that setting.

Two sides. Both constantly translating. The best products just hid the friction well.

LLMs didn't make that translation smoother.

They eliminated it.

The user just says what they want. No decoding. No learned patterns. No button that only approximates their intent.

No flow that forces them down a path the builder imagined.

The builder doesn't need to anticipate every user intent anymore.

They don't need to pre-build a surface for it.

The user speaks directly to the product's ear.

The interface is the intent now.

And that changes everything about how we build.


Fenil Suchak

We Fired 4 Customers for Being Too Chill

Reasons?

Not in Founder mode.

Had a life outside of using our product.

Not satisfactory usage.

Didn't do 997

Too chill.

Not yelling on Slack that there are bugs.

Ignoring our bugs (which we intentionally put in).

OpenFunnel usage is directly correlated to how much outbound they do and what opportunities they miss.

We started doubting our usage - but the reality was that they were just busy living a life.

This is not toxic. We want our customers to make more money.

So we introduced the Market Leader Plan - the most committed company in the space gets the product (at 10x the price).

They get a dedicated FDE, all domain specific data & resources, ideas on how to move fast + custom integrations.


Fenil Suchak

Value Slapping Is the Most Respectful Outreach

I have officially changed my name.

On Linkedin.

It now reads: "Fenil, I want to buy OpenFunnel Agent"

When I reach out, it's immediately clear that my intention is outreach.

When I send a connection request, it's 99% of the time to outbound.

I have researched you. I have a why. I have a theory. I have a story and value for you.

I call it value slapping.

Everyone loves getting slapped by value.

If you have done the work, have a point of view, and genuinely believe you can help - just say it.

That is the most respectful thing you can do.

The name just makes the intent obvious before I even say hello.

And the side effect?

People running automated AI-SDR DM me telling me they want to buy OpenFunnel Agent. :p


Fenil Suchak

GTM Doesn't Need Code. It Needs People.

An anti-pattern I see happening in real-time.

Engineers are gearing up for a world without IDEs, preparing for when code is not a thing.

Foundation model labs are betting everything on it and we're planning accordingly.

Thinking in specs.

Defining outcomes.

Letting execution happen in the background.

Hard focus on GTM, User Empathy.

At the same time, GTM teams are leaning into code.

Talking Claude Code, databases, Code Gen. Excited to "speak the language."

But here's the thing - they never needed to. I understand the transition phase and the excitement.

But SDRs, AEs who care about their prospects are SO back.

The people closest to code all their life are excited to walk away from it.

GTM doesn't need to get closer to the IDE, it needs to get closer to people.

What survives is the ability to articulate intent with precision. To know what good looks like in your domain and describe it clearly enough.


Fenil Suchak

The Market Moves. Your Account List Does Not.

Most account lists are built on one assumption - that the market is a static entity.

You enumerate your TAM once, build a wishlist, operate as if that snapshot stays true.

But a company's readiness to buy is a function of what's happening inside them right now.

Their growth bets - Their strategic internal directions - The initiative that just collapsed or was reprioritized.

None of that is captured in a list built from firmographic and technographic data 12 months ago.

Static lists confuse eligibility with readiness.

A company can be perfectly ICP-eligible with zero active pain.

Another company, one that might not even fit your ICP box, could be acutely feeling the exact problem you solve today.

The static list never surfaces that second company.

You end up overfitting pain narratives onto accounts that don't have the pain. While the actual signal exists elsewhere.

The market is a living entity. Accounts graduate in and out of relevance based on real-time conditions.


Fenil Suchak

The Spreadsheet Era of GTM Is Over

Much of the GTM workflow tooling is getting poofed by Claude Code.

We saw this coming back in September last year.

You don't need a spreadsheet with integrations anymore. The genie is out.

Every company that exposes an API can be trivially consumed by Claude Code to build any workflow. And most enrichment tools are BYOK anyway - you're already managing the vendor relationships yourself.

The tool is just an orchestration layer on top of your own API keys.

So what's left? A spreadsheet UI with some waterfall logic. That's it.

Now some of these tools are trying to go "UI-less" on top of their spreadsheet UI - which kind of defeats the whole purpose?

But the question is - does GTM have anything beyond enrichment and basic step-by-step enrichment/filtering workflows?

Don't treat "signals" as something different - it's just cron-based enrichment if you start with a list.

New tech won't help if GTM is still thinking in terms of list-based enrichment chained to integration providers.

The GTM industry as a whole has overindexed on what Clay does.

Enrichment became the entire playbook. And now that enrichment is trivially replicable with code - there's no original thought left to fall back on.


Fenil Suchak

Why 2nd Party Intent Data Needs an Inference Layer

A case for 2nd-party intent data and why it's overlooked.

The term intent data is usually used for 1st-party data like website visits and pricing-page clicks, or for 3rd-party black-box intent data like Bombora.

But 2nd-party intent data has a bad rep as a commodity because it's often associated with - a funding round. A hire. A job change.

Problem is - it feels un-nuanced and mostly noise.

It lacks relevance to your product and is weak.

And stacking weak independent signals together doesn't make it better.

Funding + hire + job posting isn't a "story." That's just scoring with extra steps.

Real signal depth looks like a story.

A job posting saying "we're building agent evals and monitoring for 99% accuracy in production" and someone just got hired to do exactly that.

Full context. A very specific intent to do things, the pain point, the maturity stage and the person driving it.

2nd party intent needs an inference layer to work.

Raw signals need to be compressed against what your product does to produce a story.

This is what LLMs were built for. And 2nd party data is context-rich and time aware.

Other signals don't have this - they are individual "hits": a website visit plus an ad click, or a Bombora keyword search for "AI" - with no context attached.

While building OpenFunnel we naturally started with 2nd-party data and tried to make sense of it - it was in plain sight that LLMs could read and infer from it.

Now imagine getting pinged daily on "stories" from your entire TAM. Just like news.

When it works you don't get a lead score. You get a relevant story.


Fenil Suchak

We Rebuilt Our Product Twice to Learn One Thing

The journey from a Slackbot to a full-fledged UI to a Slackbot again.

During our YC batch, we were strictly in Slack.

We believed traditional UI was not the interface to surface intelligence or agent output.

But we had every sign of feedback from customers telling us that Slack isn't the right interface.

Having a Slack-only bot for alerts put extreme pressure on getting every alert 100% accurate and valuable - anything else and it was spam.

Reasoning models weren't so strong back then and our ICP didn't really think in terms of alerts.

So we moved to a fully fledged self-serve UI.

It helped us speak our ICP's language - showing 1,000s of accounts upfront with high quality insights was an easy buying decision for customers.

They instantly saw the value and bought it.

But operationalizing? It became absolute hell. Endless UI/UX optimization. For a lean team it takes focus away from core value - debating why someone isn't clicking a button is useless.

But then something started really shifting.

We launched our MCP / API and Webhooks.

Claude Code is picking up and GTM teams are thinking in terms of push/alert-based actions vs static data dump.

It turns out that two weeks into the product, past the initial sell, almost everybody prefers MCP / API / Webhooks / Slack

vs pre-sales, where they prefer seeing a UI with big numbers to get an idea of the possibilities and an understanding of what data exists behind the chat interface.


Fenil Suchak

Prompt Based Account Scoring With LLMs

Vibe Scoring for RevOps!

RevOps loves to score and tier accounts.

Traditionally, top-of-funnel account scoring is based on firmographics. Technographics. Employee count. It's relatively static.

With LLMs, account scoring is going prompt-based.

Inherently everyone wants to score this way:

"Rank accounts based on domain specific context and propensity of need."

But it's a surefire way to get massive hallucinations if done with off-the-shelf LLMs (even with web search).

Without underlying data you can manually verify the answer against, it's impossible to answer.

100% of the answers will be made up. No real data = false context.

At OpenFunnel, we stitch and maintain every account narrative made of events in that account.

It grows over time.

We combine event timelines from CRM data and external GTM intel.

This generates a true history of every account.

A data layer that is context- and time-aware is much better optimized for vibe/prompt-based scoring.
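A minimal sketch of what grounding prompt-based scoring in an event timeline might look like - the helper and field names are hypothetical, not our actual schema:

```python
from datetime import date

def account_narrative(account, events):
    """Stitch dated events (CRM + external intel) into one chronological story."""
    lines = [f"{d.isoformat()}: {desc}" for d, desc in sorted(events)]
    return f"Account: {account}\n" + "\n".join(lines)

events = [
    (date(2024, 9, 2), "posted a job for an agent-evals engineer"),
    (date(2024, 6, 15), "CRM: discovery call, then went dark"),
]
context = account_narrative("acme.com", events)
# `context` is then fed to the model alongside the scoring prompt, so every
# claim in the ranked answer can be traced back to a dated event.
```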


Aditya Lahiri

Dumb Data Versus Smart Data

There used to be accurate data and inaccurate data. Now there's dumb data and smart data.

Data used to be a procurement problem. Buy the best list, the cleanest database, the most complete coverage. If your competitor bought the same vendor, you were at parity. Correctness was the differentiator, and correctness could be purchased.

That model is breaking down.

Dumb data is static and schema-bound.

It answers hardcoded queries. "Engineers in SF at companies over 500 employees." That's a WHERE clause. Anyone can run it.

Smart data is inferred.

"Find people who have been building AI customer support agents" requires stitching together job histories, project descriptions, GitHub activity, interaction patterns on LinkedIn and X. You're not filtering rows. You're constructing relevance from fragments that were never meant to be queried together.

Technically this means storage matters less than inference. Fixed schemas lose to knowledge graphs and embeddings. The query language becomes natural language over semantic indices. Your data pipeline is now a reasoning system.
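A toy illustration of the shift: retrieval as nearest-neighbor over a shared embedding space rather than a WHERE clause. The bag-of-words "embedding" here is a stand-in for a real embedding model:

```python
import math
from collections import Counter

def embed(text):
    """Stand-in embedding: bag of words. Swap in a real model in practice."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values()))
    norm *= math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

profiles = {
    "p1": "built ai customer support agents with llm tooling",
    "p2": "backend engineer working on postgres kafka microservices",
}

def search(query, profiles):
    """Nearest-neighbor retrieval: no schema, no WHERE clause."""
    q = embed(query)
    return max(profiles, key=lambda name: cosine(q, embed(profiles[name])))

search("people building ai customer support agents", profiles)  # -> "p1"
```

The query never names a column; relevance is constructed from whatever fragments the profiles happen to contain.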

The raw inputs are increasingly the same for everyone. The differentiation is what questions you think to ask and how well your system synthesizes answers.

Data is commoditized. Inference isn't.


Fenil Suchak

AI as the Final Abstraction Layer

We're witnessing something interesting right now.

GTM teams are talking MCPs, APIs, opening code editors, talking about databases and rate limits.

And while the curiosity is top tier - I think we're missing what's actually happening.

This is a transition period. Not the destination.

The true promise of AI has never been to turn everyone into engineers. It's something far more profound: technology that finally adapts to us.

Ambient. Conversational. Fluid in whatever language you already think in.

For decades, we've asked humans to learn the language of machines.

We've built abstraction layers upon abstraction layers - Assembly to C to Python to no-code - each one removing friction between intent and outcome.

AI is the final abstraction.

Soon, the idea of "code gen" will feel as quaint as "dialing up" the internet.

You won't configure tools. You'll simply tell them what you need.

For those of us in GTM, this means something important: the skills that will compound aren't technical. They're deeply human.

Understanding your customer's world. Reading between the lines. Building trust.

The tools will meet you where you are and learn you vs you learning them.


Aditya Lahiri

Bad data companies can't hide behind fancy UIs anymore.

Polished dashboards and smooth onboarding used to mask incomplete datasets and poor data models. Now users just ask questions in natural language - gaps become obvious immediately. No gradient backgrounds to distract from "we don't actually track that."

This is why MCP adoption is happening fast among data companies with strong fundamentals. Pattern: expose an MCP server, users bring their own Claude. They ask questions, get answers, compare sources without leaving the conversation.

The conversational interface surfaces problems that a carefully designed UI workflow used to hide. Companies that haven't talked to users in years are about to find out what users actually wanted all along when they talk to Claude.


Fenil Suchak

B2B SaaS Evaluation Comes Down to Intent

There's no true way to compare B2B SaaS tools.

No Open Evals.

It really comes down to two things.

Can it actually answer user intent? And how quickly?

The interesting part - defining what you want turns out to be easy.

And evaluating it turns out to be easy too.

Especially for companies in the data layer.

MCP changes things here.

It's not just a better consumption layer.

It's an eval layer.

Connect it. Ask what you actually want to know. Evaluate the answer.

Over time buyers learn which queries matter to them.

Those become the real benchmark.

Not feature set. Just - did it answer correctly?

Tools that actually work should want this.

It becomes rightly eval-based and open.


Fenil Suchak

What does it actually mean for data to be agent-ready?

The obvious answer is APIs. MCPs. Easy to plug into your workflow.

But that's just the piping. It moves data. It doesn't make the data any different.

If what's behind the pipe is the same enrichment fields - company size, industry, job title, revenue etc. - then your agent is not really smart.

And it'll produce the same generic output.

The deeper question is whether the data carries domain context.

Pain points. Insights that actually mean something in your market.

Insight into why a prospect would care.

Not just "here's a list of CTOs in fintech."

But "here's why these CTOs might be feeling this problem at this moment."

That context has to come from two places.

The provider doing the work to encode it.

And you bringing what you know about your own ICP (with intent based queries).


Fenil Suchak

From Org Charts to Team Clusters

Finding people at B2B companies is about understanding who at the company is the right person to reach out to.

At startups and in the mid-market, it's relatively straightforward: roles are fairly accurate, and you can find the decision makers.

It breaks down as the company scales up - different departments, teams within departments, product lines, teams within product lines, etc.

It becomes cumbersome to navigate: the org chart collapses, team hierarchies get strange, and roles get fuzzy and similar-sounding.

Not all people mention team names on their profiles, and mapping a team name to people becomes impossible - it's lossy and incomplete.

What gives away these insights at reliable scale:

interaction clusters between team members,

how densely they interact with each other on socials,

insights present in team-event-related posts,

all layered with info available on people's profiles.

This gives you a deeper picture.


Fenil Suchak

GTM Data Moats!

One of the reasons any good GTM data tool gets commoditized quickly is that they start selling to all competitors.

This strategy is in favor of the seller.

"Your competitor uses X, if you don't use X you'll fall behind."

But with top-of-funnel data, that's where the trick falls apart.

You and 10 direct competitors have the exact same data layer. It's a commodity. The only thing left for the supplier to do is add an orchestration layer on top of it.

Thinking of data as software or a tool is probably commoditizing it.

Data is to be served as alpha for teams to win.

Having that commoditized reduces its value the more adoption grows.

It's self defeating.


Fenil Suchak

Backtesting GTM Signals

At OpenFunnel we've captured signals over the last year across dozens of domains.

We now have true ground truth to test against.

Mapping Salesforce historical opportunity stages (prospect to closed won) for customer CRMs and layering OpenFunnel signals on top to create a comprehensive timeline view.

Detecting patterns and clusters to create "OpenFunnel Primitives" that capture "true actionable insight"

An example abstraction: "Account looking to reduce LLM costs + ICP at that company heavily consuming competitor content" = high likelihood to accelerate deal

Building many such patterns that predict the time to act:

  • Deal Acceleration Opportunity

  • Buying windows

  • Competitor active deals

  • Renewals spotting

Intent signals are all the rage.

Surprisingly, no one's taken an open, statistical, evals-based approach to modelling which signals actually lead to closed deals.
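A statistical backtest can start very simply. Here's a hedged sketch of the core arithmetic - compare the closed-won rate of accounts that showed a signal against those that didn't (the input shape is invented for illustration):

```python
def signal_lift(accounts):
    """Backtest one signal against CRM outcomes.

    `accounts` is a toy stand-in for a CRM export joined with a signal
    timeline: dicts like {"signal": bool, "closed_won": bool}.
    Returns (win rate with signal, win rate without, lift ratio).
    """
    with_sig = [a for a in accounts if a["signal"]]
    without = [a for a in accounts if not a["signal"]]

    def rate(xs):
        return sum(a["closed_won"] for a in xs) / len(xs) if xs else 0.0

    p_sig, p_base = rate(with_sig), rate(without)
    lift = p_sig / p_base if p_base else float("inf")
    return p_sig, p_base, lift
```

A lift well above 1.0 on enough accounts is evidence the signal predicts closed-won; a lift near 1.0 means the signal is noise dressed up as intent.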

fenil suchak

Fenil Suchak

Job-Postings Need Context, Not Keywords

At OpenFunnel we emphasize job postings but we also educate customers to look for context and not keywords.

Most data dumpers and technographic providers are in pure sales mode.

Not educating.

Not caring about end customers because data is a blind buy due to volume.

Here are some pitfalls we've seen:


  1. Tech stacks in job postings are no silver bullet

"Experience with Mixpanel, Amplitude, and LaunchDarkly" doesn't mean they use all three.

It doesn't tell you about relative adoption.

Do they already have a product analytics tool? Are they evaluating? Or is it just a nice-to-have?

You need relative research for that. People data. Org/team maturity.

Also matters how horizontal the tech is.

AWS in a job posting tells you nothing. A niche tool mention might give you a clue.


  2. A single hire doesn't mean tool evaluation time

An SDR hire could just be a backfill.

What's the EMEA GTM structure? Is this a proper functional build-out or a replacement?

You need relative knowledge to make the inference. False alarms are plenty.


  3. Technography+Firmography doesn't imply timing or need

RevOps is still accustomed to static territories and named-account building.

Companies move in time. Technography doesn't imply need.

For dynamic territories you need context.

Source new accounts with pain/contextual signals in real-time.

Understand what a team is currently building in the context of your product.

fenil suchak

Fenil Suchak

RevOps scoring might be outdated

I've been thinking about this a lot.

RevOps loves scoring accounts to create tiers.

If > 5 qualified hits then tier-1 account.

This still relies on independent prospect events, keywords, fields.

But with LLMs we now have understanding, inference, and reasoning.

So a tier-1 account because they had 5 keywords in their job postings?

When you actually reason through it, there's often no real context, no narrative - run through a reasoning model, the account would simply be discarded.

Number scoring on isolated events can lead to a lot of false positives.
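Here's what that threshold-style scoring looks like in miniature (a toy illustration, not anyone's production model) - raw keyword counts promote an account with zero context:

```python
def keyword_score(job_postings, keywords, threshold=5):
    """Toy RevOps-style scoring (illustrative only): count keyword hits
    across an account's job postings and promote past a threshold.
    No context, no narrative - exactly the failure mode described above."""
    hits = sum(
        posting.lower().count(kw.lower())
        for posting in job_postings
        for kw in keywords
    )
    return "tier-1" if hits > threshold else "tier-2"
```

Six incidental mentions of one tool across unrelated departments clears the bar; a reasoning pass over the same postings would likely discard the account.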

Turns out there's little evidence that lead scoring actually correlates with sales outcomes.

Most evidence comes from case studies by GTM tools selling scoring models.

The citations get circular fast.

The question being: can you actually score need?

Need is an insight. A contextual understanding. An inference.

Think about a strong lead that comes from a dinner conversation. No one's running a scoring model. They're picking up on what's said and unsaid.

That's qualitative judgment. Not arithmetic.

The goal now is to ingest more context and narrative into CRM vs traditional scoring.

Contextualizing need, reasoning, and triggering outreach when the narrative actually makes sense.

fenil suchak

Fenil Suchak

It's midnight and we've discovered a new challenge

A new POC stage prospect wants a very specific database to power their Top of Funnel GTM.

They're frustrated - the outbound infra is set up, but the data powering it is absolutely dumb. It lacks any business context.

We've never had a customer in this domain before.

We don't know if it'll even work.

We've kept OpenFunnel generalizable - think of it as enough ingredients/components to create any Vertical GTM database just-in-time.

The only thing needed is absorbing customer context and hunting for the right combination of prompts and agents.

So we get right into it - prompting, watching results, iterating. We know the data layer underneath is capable of handling complex scenarios, and reasoning models can pick this up.

4 tries and no luck.

5th try and it hits.

The results are immaculate.

We match it with the prospect's customer list and get a 90% hit rate.

Record a loom at 1am. Send it over.

8am - the prospect is geeking out on what we sent.

This is the greatest rush.

fenil suchak

Fenil Suchak

Bottom-Up vs Top-Down Signals in GTM

Top-Down Signals: Start with a list of accounts, then search for signals across the open web or other data sources.

Bottom-Up Signals: Start by defining a signal or pain point, then search for it within specific data sources.

"Find all companies in this source that [XYZ]." (XYZ being pretty generalizable)

This requires custom indexing of the underlying data source to answer intent-based queries.
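One way to picture that indexing - a toy sketch where a bag-of-words counter stands in for a real embedding model, and ranking by cosine similarity answers "find all companies in this source that [XYZ]":

```python
import math
from collections import Counter

def embed(text):
    """Bag-of-words counter standing in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
    denom = norm(a) * norm(b)
    return dot / denom if denom else 0.0

def find_companies(query, indexed_docs, top_k=3):
    """Bottom-up search: start from the signal definition and rank every
    indexed document against it, instead of starting from an account list."""
    q = embed(query)
    ranked = sorted(indexed_docs.items(),
                    key=lambda kv: cosine(q, embed(kv[1])),
                    reverse=True)
    return [company for company, _ in ranked[:top_k]]
```

In production the counter would be a dense embedding and the sort an approximate-nearest-neighbor index, but the shape is the same: the query is the signal, and every match carries it by construction.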

Why bottom-up gets better context:

Your search is hyper-targeted by context. Every matching account has a strong signal by definition - you've nailed both precision and recall.

Starting with an account list is problematic. Signals happen in real time, and you can't control when they appear. Backdated signals are often stale.

When you search for custom context + timing across your TAM, you surface a larger pool of relevant accounts. Do this over time and you build a differentiated database with context no one else has.

Think of it as building Apollo or ZoomInfo - but for your specific product.

Not a generic contact database. A live context database.

fenil suchak

Fenil Suchak

People Profiles Are Unreliable at Scale

A lot of AEs / BDRs / SDRs are changing their LinkedIn titles to

GTM / Growth @ XYZ

I wonder if this is because connection success rates are higher when GTM / Growth is in the title - the connection request only shows the title.

But it's usually clear once you scroll down the profile to find what the title truly is.

(and sometimes it's not even there - you have to make smart inferences from their previous experiences to figure out what they do)

This caused a bunch of problems with our search - the most accurately indexed part of the profile is the title & it gets tricky to find accurate information when the titles are all the same.

But this is part of a larger pattern we've seen with people profiles being unreliable sources of information at scale.

Example -

People update joining dates many months after they've joined

Personal tech stacks and skills that can be mistaken for the company's tech stack

"Building @ xyz" being the only title across the entire company

What’s something weird you’ve noticed in people profiles?

aditya lahiri

Aditya Lahiri

One Tweet Fixed My Entire Dev Workflow

Sometimes, the best debugging tool is just asking for help publicly 🚀

Last week, I hit a wall with my development workflow. Cursor's Bugbot review was incredibly thorough, but it was slowing me down during rapid iteration.

So I tweeted about it.

5 minutes later: Jon from Cursor jumps in - Bugbot's precision requires that processing time

But there's a better workflow!

Use Agent Review as your "quick check" during development, then Bugbot as the final validation

15 minutes after that: I've restructured my entire workflow. Agent Review for fast iteration cycles, Bugbot for the comprehensive final pass.

Now I have dramatically faster development without sacrificing code quality.

Don't struggle in silence. The builders behind these tools want to help you use them better. Your bottleneck might be one tweet away from a solution.

You can just ask for things!

fenil suchak

Fenil Suchak

Departmental SaaS Needs Buying Windows, Not More Contacts

There's a category between Horizontal and Vertical SaaS - Departmental SaaS.

These companies sell to specific buying committees and teams inside organizations.

And here's the difference:

They don't need more contact data. Their entire TAM is already sitting in Apollo.

They need departmental/team level data - that's missing in these contact databases.

Departmental static data will only take them so far - they also need a unique data point.

Additionally what they actually need is insight into "buying windows" at these departments/teams.

The problem? These insights move in time. They're not static fields. They exist today and vanish tomorrow.

Cracking the "buying window" for Departmental SaaS requires daily monitoring with deep context - team structure diffs, department-level goals and roadmaps, the tools they're using, competitor interactions, what they're actively trying to solve.

This is why off-the-shelf data providers fall short.

Their problems are specific and cracking timing at departmental level is hard to do with generic signal tools.

fenil suchak

Fenil Suchak

Spontaneous Germination of Unique Insights

That's not how we typically describe OpenFunnel.

But a prospect we were talking to defined it that way - it helped them map out what we do for themselves and completely understand use-cases.

It's always refreshing to hear your product described by prospects and customers rather than what's already in your head.

And it makes sense: We spot the spontaneous germination of the domain specific insight across your TAM.

For companies targeting startups - we catch new companies way before funding announcements hit the news.

For companies targeting Mid-Market/Enterprise - we spot the spontaneous germination of pain points or buying windows.

Sometimes your best positioning comes from the people you're selling to.

fenil suchak

Fenil Suchak

From Open Search to a Domain Expert Agent

We gave 1,500 users an open text box.

Asked them to search for signals & insights in their TAM.

Search accuracy completely depended on prompting precision.

But here's the thing - that precision wasn't just about how well users defined their goal.

It also depended on the underlying translator.

And the data source itself.

When queries matched the underlying data structure? Results were mind-blowing.

When they didn't? Results were bad.

Even with heavy documentation and guidance, it's nearly impossible to control user prompting.

That's why most platforms don't go beyond simple lead-gen queries. So we flipped it.

Instead of exposing internal data structure knowledge to users, we built an agent.

One that deeply understood customer context.

AND the full underlying data architecture.

It prompted itself. Repeatedly. Across different sources on the platform.

The user just sees the end state.

The final dataset. They approve or disapprove.

The agent takes feedback. Improves. And once it reaches a stable state - it runs autonomously. Daily.

The result?

A Domain Expert Agent that delivers intelligence and insights from your TAM, daily.
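The feedback loop described above can be sketched as a few lines of control flow (`run_query` and `judge` are hypothetical stand-ins for the platform's query engine and the user's approve/disapprove step):

```python
def refine_until_stable(run_query, judge, max_iters=5):
    """Self-prompting loop sketch. `run_query(prompt) -> dataset` and
    `judge(dataset) -> (approved, feedback)` are hypothetical stand-ins
    for the platform's query engine and the user's approve/disapprove
    step. Once approved, the stable prompt can be scheduled daily."""
    prompt = "initial prompt built from customer context"
    dataset = None
    for _ in range(max_iters):
        dataset = run_query(prompt)
        approved, feedback = judge(dataset)
        if approved:
            break
        prompt = f"{prompt} | fix: {feedback}"  # fold feedback back in
    return prompt, dataset
```

The user only ever sees the final dataset; the prompting-against-data-structure iterations happen inside the loop.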

aditya lahiri

Aditya Lahiri

Conversational intelligence in GTM (Top of Funnel) comes in two types

People-level - what individuals are talking about. Pain points, plans, complaints.

Company-level - what organizations are saying about specific pain-points, departmental direction, priorities etc.

Most tools focus on people-level. But the channels are messy:

LinkedIn - mostly plans and knowledge sharing. Raw problem posts are rare. Easy to ICP qualify though - so when the right person posts pain-points, it's worth reaching out. But that's a rare find on Li.

Reddit - unhinged, raw, straight to the problem. Deep context. But impossible to ICP qualify. Too many random accounts. Much noise for sales teams with a quota.

Twitter - somewhere in between.

Here's the issue with all of these: People-level intelligence rarely tells you about company budget, org direction, or departmental priorities.

So where do companies actually talk about their needs? Job postings.

It's a rich text document in which companies speak their plans, ideas, directions, and to-dos.

Stack multiple job postings from one company and you get an aggregated story the company is telling about their needs.

You of course cannot sneak into a prospect's standups. But job postings are the next best thing.

Company-level conversational intelligence. Public. Structured. Inferrable. Leading indicators.

fenil suchak

Fenil Suchak

AI-SDR is self-terminating

It kills the very channel it's trying to scale.

AI didn't suddenly 100x email-sending infrastructure - it gave you more emails you felt okay sending.

Pre LLMs - 500 personalized emails meant a team, manual effort and a lot of "does this sound stupid?" Now it's a prompt and 10 minutes.

AI removed the mental effort it takes to send OK emails.

But this broke two things:

Infra collapse - AI spam filters, domains got burned, deliverability dropped

Mental spam filters - this one's the big problem

Even if your email lands. Even if it's genuinely relevant.

Doesn't matter. Prospect's brain is already poisoned.

They've seen 50 "personalized" emails this week. All AI slop. Careless.

So now every cold outreach triggers the same response: "This person doesn't care about my problem. They're just running automation tricks at their job"

Your carefully written, actually relevant email? DOA. Guilty by association.

This is tragedy of the commons playing out in real time.

One company's AI-SDR doesn't just hurt their reputation. It burns the channel for everyone.

Every AI email sent makes the next human email less likely to be read.

End state? Automated outreach dies as a channel. Not for one company - for the industry.

What it amplifies:

  • In-person

  • Careful crafting of lists + timing

  • Deep research before first touch

  • Proof of effort that can't be faked

  • Patient, persistent human follow-up

aditya lahiri

Aditya Lahiri

Two architectures for LLM in GTM

  1. The Slopper: LLM as a generation engine.

Search space = entire TAM.

Blast out.

Rely on replies to surface intent.

More output, no reasoning.

Knowledge of intent is only obtained when someone replies.


  2. The Contextualizer: LLM as a search-space compression engine.

Query traces. Parse unstructured sources. Monitor state changes.

Estimate probability of intent. Compress search space.

Output is a list of ranked leads with context.

Humans or downstream agents handle decision-making and action.

One architecture treats LLM outputs as the product.

The other treats LLM outputs as inputs to a decision layer.

The goal isn't more generation. It's faster time-to-knowledge.
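The Contextualizer's core move - compress the TAM into a ranked shortlist - is small enough to sketch (`estimate_intent` is a hypothetical callable; in practice it would be an LLM scoring pass over traces and state changes):

```python
def compress_search_space(tam, estimate_intent, threshold=0.5):
    """Contextualizer sketch: score every account with an intent
    estimator (a hypothetical callable returning 0..1; an LLM pass in
    practice) and return a ranked shortlist with scores attached for
    the human or downstream agent to act on."""
    scored = [(acct, estimate_intent(acct)) for acct in tam]
    shortlist = [(a, s) for a, s in scored if s >= threshold]
    return sorted(shortlist, key=lambda x: x[1], reverse=True)
```

The Slopper skips this function entirely and sends to all of `tam`; the Contextualizer's output is an input to the decision layer, not the product.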

fenil suchak

Fenil Suchak

Feature Launch: Closed Lost Actions

Closed losts are usually lost due to timing or budget, not fit. The context you learned during the sales process - why they didn't buy, what they cared about - shouldn't go to waste.

On CRM connection, OpenFunnel takes your closed lost accounts and does two things.


  1. Monitors them for revival insights. Job changes, deep job posting signals, social conversations, competitor interactions.

Default prompt: "monitor this account for any movement related to my product"


  2. Generates signal-rich lookalikes autonomously.

Accounts similar to your closed lost that have an "in" today. You can apply your sales learnings from the closed lost to these new accounts.

You can also start monitoring the lookalikes just like the closed lost in one click and pipe to CRM.

Closed losts are context-rich. You know what they cared about, why they didn't buy. That context helps you work the lookalikes smarter.

Revival + lookalike discovery can happen directly on OpenFunnel.

aditya lahiri

Aditya Lahiri

Feature Launch: Interaction-Based People Finder

LLMs moved us from keywords to context and meaning - so still relying on role keywords to find people at larger companies is massively inefficient and underutilizes the capability.

With new products, markets, and pain-points emerging, finding the right people based on full context of their interaction becomes super powerful.

How it works: Instead of trivial role searches that lack depth, you can run a context search like "Find people at a target company who might actively care about xyz pain-points"

Role titles are static and generic. They don't tell you who's actually involved in what.

OpenFunnel auto-enriches every account with the right ICP role people who are most actively engaging with content related to your product, your competitors, and leaders in the vertical.

Finding the right people at larger companies just went from role-matching to context-matching.

Do check it out!

fenil suchak

Fenil Suchak

Feature Launch: Real-Time Insight Capture with Auto-Enrichment

What is it? Insights are action-worthy but time-sensitive.

The window to act - and (re) act - closes fast. Real-time insight search + auto-enrichment gives you the edge.

An insight captured in a new account might not signal a buying window yet, but it's still worth acting on:

  • Add to ad campaigns

  • Set up a nurturing sequence

  • Configure activity tracking to monitor follow-on events worth sequencing on

For any of this to happen well, auto-enrichment is essential.

Once a new account surfaces with a specific insight, the hard part is de-anonymizing it to the right person. Role titles don't capture specifics - you need contextual enrichment to find the person most likely associated with that insight.

OpenFunnel does contextual enrichment based on people interaction patterns and contextual search.

Why this matters? Real-time insight + auto-enrichment = new insight-rich accounts paired with the right people, ready to drop into action sequences immediately.

Can't wait for you to try it.

aditya lahiri

Aditya Lahiri

Feature Launch: Verticalized Insight Tags

Insight tags are super valuable but a hard UX+LLM problem.

They involve user understanding/inputs + LLMs to deliver accurately without hallucinating.

How it works:

User defines tags like

"companies looking to enhance their document processing accuracy with AI"

These insights are specifically useful to customers in the document processing space who differentiate on high accuracy.

Why it's hard: Surfacing companies with complex insights like these is prone to hallucinations, noise, and wrong reasoning.

The query is intent-heavy.

Even a great vector or hybrid search would fail if used directly - the query needs an intent-qualification LLM layer to surface results accurately.

Doing this at scale becomes a challenge of speed - it needs to be pre-computed, and it's super hard to get right on self-serve.
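A hedged sketch of that two-stage shape - retrieval first, then an intent-qualification layer that discards keyword matches without the right intent (both stages are stubbed stand-ins, not the actual pipeline):

```python
def insight_tag_pipeline(tag_query, corpus, retrieve, qualify_intent):
    """Two-stage shape for an intent-heavy tag. `retrieve` stands in for
    vector/hybrid search; `qualify_intent` stands in for the LLM layer
    that discards documents sharing keywords but not intent."""
    candidates = retrieve(tag_query, corpus)
    return [doc for doc in candidates if qualify_intent(tag_query, doc)]
```

The retriever keeps recall high and cheap; the qualification layer is where the tag earns its accuracy, and it's the part that has to be pre-computed to keep the product fast.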

The other challenge: Making users define these queries. In hindsight they look simple and obvious, but for customers to define them takes a lot of iterations.

Technographic tags are obvious to define, but not very useful for outbound and timing - Insight tags on the other hand are hard to define but extremely valuable when cracked.

Do check it out!

fenil suchak

Fenil Suchak

Feature Launch: Vectorized Lookalikes!

The most requested feature from our customers is live today.

Why this? Off-the-shelf lookalikes are keyword-based, generic, and full of false positives.

How it's different: We don't search a generic 20M company database. We search within your created TAM on OpenFunnel.

What makes it work: Our dense vectors encode a company deeply: what the product does, who uses it daily, what triggers the buying decision, what they'd do without it, their target market, their GTM model.

We match on vector similarity, then match for intent.

Why it's more actionable: Every company on OpenFunnel already has signals and insights attached. So when you find a lookalike, you know exactly what's happening at that company right now. Lookalikes without context don't take you far. This solves that.

One-click closed-won lookalikes: We pull your CRM data and auto-generate lookalikes based on your wins. Ready to reach out immediately.

Can't wait for you to try it.

aditya lahiri

Aditya Lahiri

GPT-OSS Is Now Our Workhorse for Bulk LLM Operations

We're processing hundreds of thousands of rows at database scale. And GPT-OSS has become the workhorse.

Here's why it's changed our workflow:

  • Standard classification and reasoning tasks? GPT-OSS handles them beautifully. Entity extraction, data categorization, reasoning problems - tasks that used to require custom models or heavy preprocessing.

  • Cost is negligible. At scale, this matters. We can afford to throw the model at problems we'd normally script around.

  • Speed is insane. What used to take days of batch processing now runs in hours. Low enough latency that parallelization actually works.

  • Multi-cloud flexibility. Since it's open source, it's available across all major cloud providers. We can distribute our credits and optimize for availability. No vendor lock-in.
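At that scale the orchestration is often just a fan-out. A minimal sketch, assuming `classify` wraps a per-row call to a fast model (stubbed here, since the actual client code isn't shown):

```python
from concurrent.futures import ThreadPoolExecutor

def bulk_classify(rows, classify, max_workers=8):
    """Fan a per-row call (`classify` is a hypothetical wrapper around a
    fast model's API) across a thread pool. Cheap, low-latency models
    are what make this parallelization worthwhile at database scale."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(classify, rows))
```

With an expensive, slow model the pool just burns budget faster; with a cheap, fast one, days of batch processing compress into hours.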

Honestly hoping we see more open source models from OpenAI this year. The sweet spot of capability, speed, and cost is exactly where production AI needs to be.

What models are you using for bulk LLM operations at scale?

fenil suchak

Fenil Suchak

Spotting Buying Windows Before Your Competitors Do

For B2B SaaS, timing is everything.

For companies whose customers are online, contact data is a commodity.

Traditional signals are a joke - too generic to be useful.

That leaves no free lunch. All channels are spammed.

The edge is knowing when and which accounts to double down on, so that you can send them a cake or invite them to your wedding.

Real buying windows are nuanced to each vertical and product.

With LLMs, reasoning models, and context graphs, can we infer these windows from public data alone?

Think about what's available:

  • People data snapshots over time

  • Their engagement patterns

  • Job postings and their context

  • Company data stacked historically

Temporal movement and Context tell a story.

Can these patterns be reasoned through and generalized with LLMs?

Can you predict with measurable confidence that a buying window is open?

This is something we're looking to tackle this year.

aditya lahiri

Aditya Lahiri

Engineering predictions 2026

More and more of planning - less and less of writing code by hand (~0 for all)

  1. New UI/UX standard emerges around LLM first SaaS

  2. Frameworks die

  3. Open Source models come back head to head vs frontier closed source

  4. Only frontend focused roles die

  5. More companies focus on AI Code Review, as AI Code Gen sees clear winners

You got any?

fenil suchak

Fenil Suchak

2025 wraps up - it's been a ride!

We went from navigating some of our toughest times as a remote team across timezones to landing some of our largest customers.

Along the way we hit some awesome milestones:

  • ~1500 self-serve users on the platform

  • 30+ high-ACV customers and growing

Late 2024, we realized LLMs are massively underutilized in GTM.

Most tools were solving obvious problems like list enrichment and CRM cleaning, or using LLMs as an execution layer for outbound - big problems, but ones that undermine what LLMs can actually do:

Search, research, strategize, build, and watch your TAM

Aditya and I became obsessed with the missing layer: real-time intelligence and surveillance capabilities of LLMs.

LLMs' ability to compress, read, reason across multiple sources, and surface real-time insights that are compelling to act on.

Going into 2026, we're focused on building

"Real-time Surveillance Agents for your TAM"

Going beyond just modeling ICP by adding a time component to it - to accurately surface buying windows, active competitor deals, and renewals that matter to you.

It's going to be a super exciting year ahead.

aditya lahiri

Aditya Lahiri

The LLM Personalization vs Latency Tradeoff in SaaS

We've been debating this constantly at OpenFunnel. It's the new fundamental tradeoff in LLM-first SaaS.

Here's the tension:

With a few fast LLM calls, we can make everything hyper-personalized.

Contextual insights. Specific reasoning. Information tailored exactly to what this user cares about right now. It hits the user instantly and they "just get it". It's like a friend who knows them and their business is giving them information through our product.

But nobody wants to wait 3 seconds for their results.

The old solution was simple: Pre-compute everything. Store it. Serve it instantly. Fast, but generic.

The new problem: Users now expect both. Personalized AND instant.

So the engineering challenge becomes: What do we pre-compute and store? What do we personalize at runtime? And how do we make that runtime layer fast enough that nobody notices?

Our current approach:

  • Fast open-source models for the personalization layer

  • Vector embeddings for instant semantic search

  • A ton of optimization work on the unsexy stuff (caching strategies, parallel calls, smart pre-fetching)
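The pre-compute/runtime split can be sketched in a few lines (the `personalize` callable stands in for the fast-model layer; all names are illustrative):

```python
class PersonalizedCache:
    """Split sketch: pre-computed generic payloads served instantly,
    with a fast runtime personalization pass (`personalize` stands in
    for a small, low-latency model call) layered on top."""

    def __init__(self, personalize):
        self.store = {}                 # pre-computed generic payloads
        self.personalize = personalize  # fast runtime layer

    def precompute(self, key, payload):
        self.store[key] = payload

    def serve(self, key, user_context):
        base = self.store.get(key)
        if base is None:
            return None                 # miss: fall back to full compute
        return self.personalize(base, user_context)
```

The engineering question is exactly where to draw the line: everything above `serve` can be slow and thorough; everything inside it has to finish before the user notices.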

The bar has moved. "Fast or personalized" isn't a choice. Users NEED both.

That's what makes this fun to build!

fenil suchak

Fenil Suchak

Job-Posting Technographics Are Enrichment, Not Signal

Job-posting technographics are the new rage - there's high demand, but we tell our customers and prospects that there's a problem.

Most GTM teams treat technography as golden intent. It isn’t. It’s enrichment. Helps with inbound. Helps when there’s already interest. Good context and talking points before a call. Not a buying signal on its own.

There’s another problem too.

Companies casually write "experience with XYZ" in job posts. And as horizontal tools spread, this shows up everywhere.

If you capture all of these, you drown in noise. People try to fix it by counting mentions. But at scale, that breaks.

Bigger companies have multiple teams and multiple orgs. Keyword volume stops meaning much. An actionable insight is different.

"We are hiring you to build ABC with XYZ because we are facing this"

That is insight to act on.

A project spinning up. A buying cycle forming. A bidding window about to open.

This is the signal GTM teams need to capture in real time.

Because it actually predicts need and pain.

fenil suchak

Fenil Suchak

Why We Only Do Personalized Demos

We stopped doing broad demos because showing everything at once looks impressive but rarely converts. Prospects connect with only a fraction, and the rest dilutes value. So we rebuilt the process:

  • Every demo shows their end state after setup

  • Their vertical, GTM motion, and product language

  • Their customer lookalikes modeled inside OpenFunnel

The call shifts from a pitch to a jam session - “wait, how did you figure this out?”

And with LLMs plus our internal abstractions, generating a fully personalized demo takes under twenty minutes.

fenil suchak

Fenil Suchak

The 48-Hour Post-Event Window

Most teams lose momentum after Re:Invent, but the 48-hour window after everyone flies home converts the highest.

People are energized, inboxes aren’t crowded, and conversations are still fresh.

  • Send a follow-up with a photo - nobody ignores their own face.

  • Reference something specific you discussed.

  • Ask a small question before emailing; it boosts opens.

Waiting a week turns warm connections cold. Follow up while it still matters.

aditya lahiri

Aditya Lahiri

Who Inside an Account Actually Cares

AI is creating new functions faster than LinkedIn titles can update, making it harder to identify the true ICP inside an account. Reasoning LLMs changed this.
They interpret interaction patterns at scale:

• what people engage with
• who they follow
• which competitors they react to

From these signals, interaction clusters reveal who is actually leaning into GEO, AEO, or any emerging function. When an account shows GEO activity, LLMs surface the real operators long before their titles change, exposing who understands and drives new functions across large organizations.

fenil suchak

Fenil Suchak

LLMs as Healthcare’s New Referral Layer

LLMs are becoming a referral layer in healthcare as organizations invest in GEO to ensure AI understands specialties and answers intent-driven queries.

• Max Healthcare: structured clinical data for accurate department mapping.
• Hone Health: LLM-friendly diagnostic content.
• Claritev: clearer taxonomies for symptom-to-service alignment.
• AlgaeCal: evidence-based bone-health content.
• Progressive Dental: optimized specialty pages.
• Sakara Life: refined wellness program and ingredient data.

Healthcare GEO ensures models surface the right care for the right conditions every time.

aditya lahiri

Aditya Lahiri

Vibe Coding Inside the Enterprise

“You can’t vibe-code in an enterprise.”

The data disagrees.

  • Meta: AI PMs building internal tools and agentic automations.

  • ESA: teams creating rapid prototypes and data-viz mocks.

  • Autodesk: engineers now required to have vibe-coding experience.

  • Uplers: backend roles labeled “vibe-coding first.”

  • NRG Energy: AI teams accelerating infra and multi-agent workflows.

  • Okta: DevRel and product marketing using v0, Netlify, and Lovable.

fenil suchak

Fenil Suchak

GTM Teams in Founder Mode

GTM and product teams are shifting into founder mode and using vibe-coding tools to prototype without waiting on engineering.

Examples:

  • Keepme.ai — demand gen shipping landing pages with Lovable

  • Blink.new — marketers testing acquisition funnels

  • Scale AI — building demos for client pitches

  • HubSpot — designers shipping POCs

  • Razorpay — AI PMs creating internal tools

  • Eulerity & InOrbit — marketers wiring automations

Fast prototypes cut alignment time and speed up production.

aditya lahiri

Aditya Lahiri

Thought Leadership for GEO Visibility

Thought leadership is shifting to AI distribution. Companies are redesigning roles so expertise appears accurately in generative search.

Teams now work to ensure LLMs surface correct narratives across domains:
• PartnerCentric: affiliate marketing
• Giftogram: gifting workflows
• Forter: fraud prevention
• Meltwater: media intelligence
• Versapay: AR ops
• TEKsystems: expert content
• Dykema: legal clarity
• DTN: weather and energy intelligence

Authority now depends on how well AI describes you, and this shift is redefining how expertise is found.

fenil suchak

Fenil Suchak

Saturday Night Watch Parties

We spend weekend watch parties reviewing PostHog session replays.

Watching a user glide through the happy path is great, but seeing them drop off at the final step tells the real story.

These moments reveal whether we designed from our own assumptions or captured their instinct.

We take notes, look for friction, and often turn a single replay into a roadmap update because it shows exactly where intent breaks and what users actually expect.

fenil suchak

Fenil Suchak

Why Personalized GTM Databases Matter

Personalized databases work because every product has its own context and triggers.

We build real-time GTM databases from live activity across your entire TAM, surfacing new companies the moment your indicators appear and identifying the right people based on how they engage in the ecosystem.

In AI-SEO, that means spotting teams building GEO or LLM SEO functions and the people interacting with leaders.

Off-the-shelf tools miss this with small static lists and broad, commoditized signals.

fenil suchak

Fenil Suchak

What Is a Personalized GTM Database?

Most "GTM databases" are just contact directories flexing "50M companies"
→ but how many are actually active?

Why account tiering matters:
Horizontal SaaS serves everyone. You need to prioritize based on timing, activity, and traits - not surface filters.

The problem:
Technographics work for qualification, not timing. They don't show what's changing, why it matters, or who's involved.

Static databases → wasted credits on inactive accounts.

Companies move in time.
Last month's non-priority could be Tier 1 today (like adopting usage-based billing).
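A minimal sketch of what timing-aware tiering could look like, assuming hypothetical signal names, weights, and thresholds (the real scoring would come from live activity data):

```python
# Re-score accounts on fresh signals rather than static traits.
# Signal names and weights here are illustrative assumptions.
def tier(account: dict) -> str:
    score = 0
    if "usage_based_billing" in account.get("recent_signals", []):
        score += 2  # timing: a live change, not a static trait
    if account.get("active_last_30d"):
        score += 1  # recency: the account is actually moving
    return "Tier 1" if score >= 2 else "Tier 2" if score == 1 else "Tier 3"

# Last month this account was a non-priority; a fresh signal re-tiers it.
acct = {"name": "Initech", "recent_signals": ["usage_based_billing"], "active_last_30d": True}
print(tier(acct))  # "Tier 1"
```

The point of the sketch: tiering is a function of *recent* signals, so it has to be recomputed as the database updates, not frozen at import time.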

aditya lahiri

Aditya Lahiri

Is the Data Layer for Agent Consumption Just Text-to-SQL?

Agents don't think in filters - they think in meaning.
They combine world knowledge with user context to craft semantic queries based on intent, not keywords.

The problem: The industry is stuck translating natural language into pre-set SQL filters. Rigid schemas. Static filters.

Our agent-native data layer: Vector embeddings + semantic matching for company activities:

  • What functions are they building?

  • What migrations are happening?

  • Who's joining and leaving?

Static filters for deterministic data:

  • Headcount, location, funding stage

The bet: Agents are smart enough to query both static and semantic fields. We're building for agents that reason around meaning.
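The hybrid pattern above can be sketched in a few lines. This is a toy illustration, not the actual data layer: the bag-of-words "embedding" stands in for a real sentence-embedding model, and the company records are invented.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words vector; a real system would use an embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

companies = [
    {"name": "Acme", "headcount": 120,
     "activity": "hiring an llm seo lead building a geo function"},
    {"name": "Globex", "headcount": 800,
     "activity": "migrating billing to usage based pricing"},
]

def search(query: str, min_headcount: int = 0):
    # Deterministic filter first (static fields), then semantic ranking
    # over activity descriptions (meaning, not keywords).
    qv = embed(query)
    eligible = [c for c in companies if c["headcount"] >= min_headcount]
    return sorted(eligible, key=lambda c: cosine(qv, embed(c["activity"])), reverse=True)

print(search("teams building llm seo functions", min_headcount=50)[0]["name"])  # "Acme"
```

Note the split: headcount is a hard filter, while "building llm seo functions" matches activity by similarity, so an agent can combine both in one query.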

aditya lahiri

Aditya Lahiri

How Are Fast, Cheap Open Source Models Changing How We Build?

Inference providers like Groq unlock two shifts:

  1. LLM-native architecture

We replaced deterministic code with LLMs from day 0.

Example: Natural-language blocklists. Instead of hardcoded lists, we use Groq + web search in real time. "Exclude marketing agencies" just works.

  2. Rapid prototyping with model swaps

  • OSS models for intermediate reasoning

  • SOTA models for complex reasoning

v1 fast > perfection. Optimize later based on usage.

The insight: Orchestrate a hierarchy of models - cheap/fast for most flows, expensive/smart only when needed.
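A minimal sketch of that hierarchy, with stub functions standing in for real model calls. The function names and the routing heuristic are illustrative assumptions, not the production router:

```python
def fast_oss_model(prompt: str) -> str:
    # Stand-in for a small open-source model on a fast inference provider.
    return f"fast:{prompt}"

def sota_model(prompt: str) -> str:
    # Stand-in for a frontier model reserved for hard cases.
    return f"sota:{prompt}"

def route(prompt: str) -> str:
    # Cheap/fast for most flows; escalate only when the task looks like
    # multi-step reasoning. The cutoff is an assumption, not a tuned rule.
    needs_reasoning = len(prompt.split()) > 30 or "step by step" in prompt.lower()
    return sota_model(prompt) if needs_reasoning else fast_oss_model(prompt)

print(route("Exclude marketing agencies"))  # routed to the fast model
```

Because both models sit behind the same interface, swapping the routing rule (or either model) later is a one-line change, which is what makes "v1 fast, optimize later" cheap.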

fenil suchak

Fenil Suchak

What Does GTM Data Layer 2.0 Look Like for Humans and GTM Agents?

We're studying GTM timing, movements, and real-time signals that reveal pain-points.

One thing became obvious: People and Company APIs are still CRUD operations.

They weren't built for GTM search or insight discovery.

Getting to meaningful insights is slow, inefficient, and fragmented.

The shift: Many are refactoring code-bases to make them agent-ready. The same shift is coming to GTM.

A data layer for agentic search looks nothing like CRUD APIs.

Converting natural language to API params repeatedly is wildly ineffective. We're just masking text-to-SQL as search.

aditya lahiri

Aditya Lahiri

What Are the Dumb Ways to Die as an AI Startup in 2025?

  1. Assume models won't get better

Building your moat around "GPT-5 can't do X yet" is a death sentence.

Your defensibility must be orthogonal to model capabilities.

  2. Treat evals as performative

Without rigorous evaluation frameworks, you're flying blind. Evals are your early warning system.

  3. Vibe-code production features

Ship fast, not recklessly. In AI products, trust compounds slowly and evaporates instantly.

  4. Skip being in-person in SF

The density of AI talent, customers, and capital in SF is unmatched. Serendipity still matters.

  5. Never build a data moat

Proprietary data and feedback loops are real differentiators.

  6. Treat non-AI infrastructure as secondary

Brilliant LLM + broken integrations = customers leave.

Which mistake do you see most often?

fenil suchak

Fenil Suchak

Are Speed and Insight the Only Moats in GTM?

With competitors appearing weekly, timely and precise outreach wins.

If you're using static filters - even technographics - you're too late or off on timing.

Funding signals are commoditized. Everyone has them, everyone reacts to them, and the result is signal slop.

You need creative timing and insight - leading indicators of pain with nuance.

Great GTM teams spot live, nuanced movements:

  • Job posts revealing plans

  • Website/pricing changes

  • Hyper-specific layoffs

  • Sub-departments that have stayed static too long

Find leading indicators of pain.

That's where timing wins and GTM teams succeed.

Made with ♥ in SF

© 2026 OPENFUNNEL. ALL RIGHTS RESERVED.
