
A/B Testing Pitfalls – Interview w/ Sumit Gupta @ Notion

  • Dan Wilson 

Interview w/ Sumit Gupta – Business Intelligence Engineer at Notion


In our latest episode of the AI Think Tank Podcast, I had the pleasure of sitting down with Sumit Gupta, a business intelligence engineer at Notion, author of The Tableau Workshop, and an expert in modern data stacks, marketing analytics, and business intelligence. With a career spanning companies like Snowflake and Dropbox, Sumit has spent years optimizing A/B testing strategies, analytics engineering, and data-driven decision-making. 

This turned out to be a deep dive into the nuances of A/B testing, the evolution of analytics roles, and the intersection of AI with experimentation. Sumit’s journey from a childhood in Dharavi, India, to shaping business intelligence at some of the world’s leading tech firms is nothing short of inspiring.

From Dharavi to data science: The journey of Sumit Gupta

Sumit’s journey into data science wasn’t linear. Born in Dharavi, Mumbai—one of the world’s largest slums—he learned early on that hard work and education were the keys to breaking out of difficult circumstances.

“There’s no replacement for hard work. You do whatever it takes to upskill and improve your situation,” Sumit reflected.

After completing his bachelor’s in computer engineering, he realized that traditional software engineering wasn’t his calling.

“I didn’t want to be stuck in a box, just coding all day. I wanted to blend technology with business,” he explained.

That realization led him to pursue a Master’s in Information Systems at Syracuse University, where he specialized in data science. His first break in analytics came with an internship at Lifetiles in New York City, and from there, his career in marketing analytics, business intelligence, and experimentation took off.

A/B Testing: Common pitfalls and how to avoid them

One of the main topics of discussion was the hidden pitfalls of A/B testing and how companies often get it wrong. According to Sumit, there are two major failure points:

1. Biases in Experimentation

Many teams make the mistake of stopping tests too early, before enough data has been collected, leading to false positives or negatives.
 “If you decide to run a test for four weeks, don’t stop it early just because results aren’t appearing in two days,” Sumit cautioned.

He emphasized the importance of guardrails—metrics that signal when an experiment is hurting performance so it can be stopped early if needed.
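To see why stopping after two days is risky, a standard two-proportion power calculation shows how much traffic a small lift actually requires. This is a minimal sketch with illustrative numbers (the 0.5-point lift echoes the Dropbox example later in the conversation; the baseline rate is an assumption, not a figure from the episode):

```python
import math

def sample_size_per_variant(p_baseline, p_variant):
    """Approximate sample size per variant for a two-sided two-proportion
    z-test at alpha = 0.05 with 80% power."""
    z_alpha = 1.96    # two-sided, alpha = 0.05
    z_beta = 0.8416   # power = 0.80
    variance = p_baseline * (1 - p_baseline) + p_variant * (1 - p_variant)
    return math.ceil((z_alpha + z_beta) ** 2 * variance
                     / (p_variant - p_baseline) ** 2)

# Detecting a 0.5-point lift on an assumed 10% baseline takes tens of
# thousands of users per variant -- far more than two days of traffic
# for most products.
n = sample_size_per_variant(0.10, 0.105)
print(n)  # roughly 58,000 per variant
```

Guardrail metrics are the exception: they exist precisely so a harmful experiment can be killed before the full sample is collected.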
 

2. Executive Pressure and Misinterpretation of Data

Often, executives want quick results and push teams to act on inconclusive data. The problem? Rushed conclusions lead to flawed decision-making.
 “At Dropbox, even a 0.5% conversion change could mean millions of dollars. You can’t afford to jump to conclusions.”

A/B testing for SEO and marketing

A/B testing in SEO and marketing differs significantly from traditional product experimentation. Google penalizes duplicate content, making standard A/B testing nearly impossible. Instead, pre-post testing is used.

“For SEO, we cluster similar pages and compare performance before and after changes rather than running simultaneous versions,” Sumit explained.
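The pre-post approach he describes can be sketched in a few lines. The page paths and click counts here are purely hypothetical; a real analysis would also control for seasonality and use a holdout cluster:

```python
# Pre-post comparison for a cluster of similar SEO pages: compare
# aggregate clicks before and after a change, instead of serving two
# simultaneous variants (which would create duplicate content).
clicks_before = {"/guide/a": 1200, "/guide/b": 900, "/guide/c": 1500}
clicks_after = {"/guide/a": 1350, "/guide/b": 980, "/guide/c": 1570}

total_before = sum(clicks_before.values())
total_after = sum(clicks_after.values())
lift = (total_after - total_before) / total_before
print(f"Cluster lift: {lift:.1%}")  # 8.3%
```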

Even on social media, A/B testing needs to be adapted. He found that overloading Instagram posts with hashtags (more than five to seven) actually reduces reach, as the algorithm flags them as spam.


A/B testing in startups vs. large tech companies

Another key discussion point was the differences in A/B testing between startups and large companies.

• Startups: A/B tests are usually managed by a small, centralized team, reducing conflicts.
• Enterprises: Multiple teams often run conflicting experiments on the same pages, skewing results.

At Dropbox, this issue became so significant that they introduced an internal experimentation directory.

“We needed a shared repository so teams could check what experiments were running and avoid overlapping tests.”
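At its core, a directory like this only needs to answer one question before launch: is anything else already running on this surface? A hypothetical minimal sketch (Dropbox's actual internal tool is not public, and the experiment names here are invented):

```python
class ExperimentDirectory:
    """Minimal registry that rejects experiments overlapping on the same surface."""

    def __init__(self):
        self.active = {}  # surface (e.g. a page path) -> experiment name

    def register(self, name, surfaces):
        # Refuse to launch if any target surface already hosts a live test.
        conflicts = {s: self.active[s] for s in surfaces if s in self.active}
        if conflicts:
            raise ValueError(f"Overlapping experiments: {conflicts}")
        for s in surfaces:
            self.active[s] = name

directory = ExperimentDirectory()
directory.register("new_pricing_cta", ["/pricing"])
# directory.register("pricing_redesign", ["/pricing"])  # would raise ValueError
```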

Attribution modeling and measuring long-term impact

Understanding how marketing spend translates to revenue is crucial. This is where multi-touch attribution models come in.

• Last-Touch Attribution: The final interaction before conversion gets 100% of the credit (e.g., a LinkedIn ad).
• Linear Attribution: Spreads credit evenly across all customer touchpoints before conversion.

“Marketers need to know which channel is actually driving sales, not just which one appeared last.”
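The two models above reduce to a few lines of code. The touchpoint names are hypothetical, and real attribution systems add time decay, deduplication, and cross-device stitching on top of this:

```python
def last_touch(touchpoints):
    """100% of the credit goes to the final interaction before conversion."""
    return {touchpoints[-1]: 1.0}

def linear(touchpoints):
    """Credit is split evenly across every touchpoint on the path."""
    share = 1.0 / len(touchpoints)
    return {t: share for t in touchpoints}

journey = ["search_ad", "email", "linkedin_ad"]  # hypothetical customer path
print(last_touch(journey))  # {'linkedin_ad': 1.0}
print(linear(journey))      # each channel gets one third of the credit
```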

Sumit also warned against relying solely on short-term experimental results:

“SEO experiments take months to show impact. If you only look at short-term gains, you’ll cut strategies that could yield massive long-term benefits.”

The role of AI in experimentation

As AI becomes more advanced, its role in A/B testing is expanding. AI can help:

1. Generate smarter hypotheses
  • AI can analyze past experiments to recommend which tests are worth running.
2. Run multi-armed bandit tests
  • Instead of evenly splitting traffic, AI dynamically allocates more traffic to the better-performing variant in real time.
3. Predict long-term business impact
  • AI can analyze past data to forecast the downstream effects of an experiment.
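The dynamic allocation in point 2 is classically done with Thompson sampling, one common multi-armed bandit algorithm. This simulation uses made-up conversion rates to show how traffic drifts toward the stronger variant:

```python
import random

random.seed(42)

# Simulated true conversion rates for two variants (unknown to the algorithm).
true_rates = {"A": 0.05, "B": 0.10}
# Track successes/failures per arm; posterior is Beta(success + 1, failure + 1).
stats = {arm: {"success": 0, "failure": 0} for arm in true_rates}

for _ in range(5000):
    # Thompson sampling: draw from each arm's posterior, play the best draw.
    draws = {arm: random.betavariate(s["success"] + 1, s["failure"] + 1)
             for arm, s in stats.items()}
    arm = max(draws, key=draws.get)
    # Simulate whether this visitor converts.
    if random.random() < true_rates[arm]:
        stats[arm]["success"] += 1
    else:
        stats[arm]["failure"] += 1

pulls = {arm: s["success"] + s["failure"] for arm, s in stats.items()}
print(pulls)  # variant B ends up receiving the large majority of traffic
```

Unlike a fixed 50/50 split, the bandit spends less traffic on the losing variant as evidence accumulates, which is exactly the "reduce wasted experimentation" benefit Sumit describes.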

“AI will help reduce wasted experimentation by identifying trends humans might miss,” Sumit predicted.

However, he also issued a caution:

“AI is increasing productivity, but it’s also making us dumber. We’re losing the ability to think through problems ourselves.”


Building a strong experimentation team

When hiring for A/B testing and experimentation roles, Sumit looks for a balance of:

• Statistical expertise (understanding p-values, Bayesian inference)
• Strategic thinking (choosing the right experiments to run)
• Data engineering skills (knowing how to clean and analyze data)
• Strong communication (being able to explain results to non-technical stakeholders)

“You can be a statistics wizard, but if you can’t communicate your findings, your experiment is worthless.”

One of the best books he recommends for improving A/B testing skills is Trustworthy Online Controlled Experiments by Ronny Kohavi, an industry leader in experimentation at Microsoft and Airbnb.

For improving data visualization, he swears by Cole Nussbaumer Knaflic’s Storytelling with Data.

“That book changed my career. It taught me how to present data in a way that drives action.”

Final advice: Focus on high-impact experiments

As we wrapped up the discussion, I asked Sumit for one key piece of advice for companies looking to improve their experimentation.

“Don’t try to run 50 experiments a quarter. Five well-planned experiments will give you better insights. Focus on the ones that drive revenue, not just vanity metrics.”

His emphasis was clear: experimentation should be a revenue generator, not just a support function.

Closing thoughts

This conversation with Sumit Gupta was a masterclass in A/B testing, data-driven decision-making, and the future of AI-driven experimentation. From common mistakes to AI-powered strategies, the insights shared in this episode provide an invaluable roadmap for companies looking to improve their data experimentation practices.

As AI continues to reshape analytics, companies must balance automation with human intuition, ensuring that AI enhances decision-making rather than replacing critical thinking.

🎥 Sumit’s Educational Videos on Instagram


If you’re looking to level up your experimentation strategy, this episode is a must-listen. Thanks again to Sumit Gupta for sharing his expertise, and stay tuned for more cutting-edge insights on AI, data science, and business intelligence.

Join us as we continue to explore the cutting-edge of AI and data science with leading experts in the field. Subscribe to the AI Think Tank Podcast on YouTube. Would you like to join the show as a live attendee and interact with guests? Contact Us
