Interview with Raj Sundaram
Sr. Director of Product
This is a senior director with 15+ years at Meta, HP, Broadcom, and Agilent. He currently leads Teradata's Agentic AI Platform — the most strategic product bet in the company. This round will be fundamentally different from Round 1 with Jefferson.
Raj Sundaram
At Meta, Raj led modernisation of their petabyte-scale analytics data warehouse into an AI-native, privacy-aware platform across WhatsApp, Instagram, Reality Labs, and GenAI products. At Teradata, he now leads the Agentic Platform — autonomous AI agents grounded in Teradata's analytics data. Education: MBA from University of Chicago + MS Mechanical Engineering from Purdue.
🧠 How He Thinks
Comes from both engineering (Purdue MS) and business strategy (Chicago MBA). Expects candidates to operate at both levels — technical enough to be credible, strategic enough to be interesting. His Meta experience means he thinks in terms of platform and scale.
🎯 What He'll Test
Strategic clarity on AI/data platforms. Your ability to discuss agentic AI seriously. Evidence you've thought about product at a systems level. He'll look past surface-level answers faster than Jefferson did — probe for depth.
🔥 Hot Buttons
Agentic AI is his life right now. Privacy and trust in AI (Meta background). Developer productivity. Platform thinking over feature thinking. Data governance at scale. Don't treat these as buzzwords — engage with the substance.
⚡ Rapport Builders
He followed Peter Yang (product leader & author) — signals interest in PM thought leadership. MBA background means he appreciates strategic framing. His Purdue engineering base means he respects technical depth. Mention his Agentic platform work specifically.
Round 1 → Round 2: The Shift in Register
Four critical mindset shifts you must make before this conversation starts.
With Jefferson, you led with your execution track record — the 38%, the 21.8%, the sprints. That's the right register for a peer PM evaluating fit. Raj does not care primarily about execution metrics. He cares whether you have a point of view on where products should go and why.
Right Register (Raj-Level) "My thesis at Enigma was that ambiguity in requirements is a product management failure before it's an engineering problem. The 38% efficiency gain was the proof point — but the real change was shifting the team's mental model from 'build fast' to 'validate early.' At Teradata's scale, that same principle applies to AI feature development where wrong assumptions are far more expensive to unwind."
Notice: same facts, completely different lens. Raj wants the "so what" for the world, not just for your team.
Raj likely knows you spoke with Jefferson. One of the most powerful things you can do is reference that conversation to establish continuity — it signals you're serious, you retained everything, and you're not starting from zero.
This does three things: shows preparation, gives Raj something to react to, and positions you as someone who thinks in platform relationships — exactly his frame.
At Meta, Raj led modernisation of a petabyte-scale analytics data warehouse into an AI-native, privacy-aware platform used by WhatsApp, Instagram, Reality Labs, and GenAI products. This background gives him three strong intuitions he will apply to your answers:
1. Scale intuition: Meta operates at billions of users. He'll listen for whether your answers would hold up at orders of magnitude beyond the context you built them in, or only work at the scale you described.
2. Privacy intuition: At Meta, data trust and governance were existential. When AI topics come up, he'll listen for whether you treat governance as an afterthought or a first-class product requirement.
3. Platform intuition: Meta doesn't build features — they build platforms that enable features. He'll listen for whether you think in systems or in individual features.
When in doubt, ask yourself: "Does my answer sound like it would work at 10x the scale I described it at?" If yes, you're in Raj's register. If no, reframe.
Raj has 15+ years. You have 2.5. That's not a problem you can hide — it's a dynamic you have to address directly if it comes up, because a Sr. Director will respect honesty and a specific plan far more than bravado or deflection.
Strategic Thinking Questions
Raj will probe your ability to reason at the product strategy level. Answers that work for Jefferson won't cut it here — you need a clear point of view.
"The fundamental difference is who your primary customer is. A feature product is built for end users who want a specific job done. A platform product is built for builders — developers, data scientists, product teams — who will use your platform to serve their own end users."
Platform PM: Optimises for developer/builder adoption and ecosystem health. Success = builders build successfully on your platform and their products thrive. Metrics: API call volume, time-to-first-integration, ecosystem breadth, developer retention.
Critical difference in prioritisation: On a feature, you can ship opinionated UX. On a platform, your design decisions become constraints for everyone who builds on top. Every choice has multiplied consequences — so you must be far more conservative about breaking changes, far more deliberate about abstractions, and far more focused on backward compatibility and API design than a feature PM ever needs to be.
"Teradata's Agentic Platform is the clearest example of this: you're not building an AI agent — you're building the platform that lets AI agents reason about enterprise data reliably. The decisions you make at the platform level — what context is accessible, how trust is established, what governance hooks exist — determine what's even possible for every agent built on top."
"Three thesis statements for 3 years from now:"
2. Data trust and AI governance become existential differentiators. We'll see one or two high-profile enterprise AI failures — wrong decisions made by ungoverned agents — that create a massive market demand for "trusted AI." Companies with Teradata's governance lineage are uniquely positioned. The risk is that Teradata doesn't market this aggressively enough to win the narrative.
3. The "build vs. buy" debate for AI infrastructure shifts to "integrate vs. replace." Most enterprises will not rip and replace their Teradata investments. They'll want to integrate LLMs and agents into their existing Vantage ecosystem. Teradata's winning move is making that integration frictionless — better than building from scratch on a hyperscaler.
What that means for Teradata: The Agentic Platform Raj is building isn't a side bet — it's the future core value proposition. ClearScape Analytics becomes the analytical engine that agents trust. VantageCloud becomes the data substrate. The question is whether Teradata makes the narrative shift fast enough from "enterprise data warehouse" to "enterprise AI operating system."
"This is ultimately a question about where your true differentiation lies and what you can defend over time."
Buy/Acquire when: The market has already validated a specific capability and you need it now, not in 18 months. Building would distract engineering from your actual moat. Teradata's Aster Data acquisition (graph analytics) is the template here — buy adjacent capability, integrate deeply.
Partner when: The capability is important but complementary, and the ecosystem wins if you don't own it. KNIME integration for no-code ML, open-source framework support (scikit-learn, XGBoost in BYOM) — these are better as partnerships because they expand the user base without building something Teradata isn't best positioned to maintain.
The Agentic AI lens: The LLM layer (GPT-4o, Claude, Gemini) — partner/integrate, don't build. The data grounding layer (how agents query and trust Teradata data) — build. The governance/audit layer — build. The agent orchestration layer — potentially partner with emerging frameworks (LangGraph, CrewAI) while owning the Teradata-specific integration.
"Developer experience (DX) is how quickly and confidently a developer can go from 'I need to solve this problem' to 'my solution is in production.' It has nothing to do with how many features the platform has — it's about friction, clarity, and trust."
Documentation as product: In enterprise contexts, the developer reading your docs is often a consultant or a data scientist under deadline. Bad docs kill adoption faster than any feature gap. Treat docs as a product with their own backlog and success metrics.
Error quality: What happens when something breaks? Cryptic stack traces destroy trust. Human-readable, actionable error messages are a DX investment with outsized ROI.
Consistent APIs: Enterprise developers will interact with Teradata across SQL, Python APIs, REST endpoints, and potentially agent frameworks. Inconsistent patterns across these create cognitive overhead that accumulates into resentment.
"Why it matters for Teradata specifically: the data scientist who struggles with Teradata's DX will advocate for Databricks in the next architecture review. The data scientist who finds it productive will defend Teradata when the CTO asks why they're not migrating to Snowflake. DX is enterprise retention, not just a nice-to-have."
"This is the core hard problem of platform product management: your product doesn't directly generate user value — it enables others to generate user value. So your north star has to be a leading indicator of downstream success, not just platform usage."
Better north stars: "Outcomes enabled" — how many production AI/ML pipelines are running successfully on VantageCloud per month? "Time to production" — how many days from first model training to a model scoring live transactions in a customer environment? These measure whether the platform is actually enabling real value downstream.
For Teradata's Agentic Platform specifically: I'd propose something like "autonomous decisions made by AI agents on Teradata data that were reviewed and validated correct by a human" — because this captures both adoption AND trustworthiness, the two things Raj's platform needs to prove simultaneously.
"The key principle: your north star should make it obvious when you've succeeded and obvious when you've failed. For a platform, that usually means measuring the success rate of the things built on top of you, not just the health of the platform itself."
"A 45-year-old enterprise company is both a constraint and an asset — and the mistake is treating it as one or the other exclusively."
The constraint: Existing customer commitments slow down radical redesign. Every architectural decision carries a compatibility burden. The sales motion is built around established value propositions — shifting that narrative takes time and internal persuasion, not just good product decisions.
The asset: Teradata has something most AI startups would pay any amount to acquire: trust. Fortune 500 banks, telcos, and retailers trust Teradata with their most sensitive data. That trust is a moat that takes decades to build. Any AI platform Teradata releases starts with enterprise credibility that Databricks or a new agentic AI startup has to earn slowly.
2. Identify capabilities where Teradata's trust and scale position it to do things no startup can — invest disproportionately there (ModelOps at millions of models, agentic AI with enterprise governance). That's where Teradata can lead.
3. Accept that some innovations are better acquired or partnered than built — and lobby internally for the speed and decisiveness to do so.
"Pricing for agentic AI is unsolved territory — most enterprise vendors are figuring it out in real time. My framework would start with value alignment: price in a way that correlates with the value the customer extracts."
Outcome-based / consumption model: Price per autonomous decision made, per query resolved, per model deployed to production. Aligns perfectly with value. Risk: hard to predict costs for customers — CFOs hate surprises.
Tiered capability model (most likely right answer for enterprise): Core VantageCloud subscription includes baseline agentic capabilities. Premium tiers unlock governance audit trails, enterprise-grade agent orchestration, more models in production, priority compute for agent workloads. This matches how enterprise software is actually bought and preserves predictability.
My recommendation: A consumption-based model with a spend cap — customers pay for what agents do, but with a ceiling they can plan against. This captures value alignment without the CFO risk.
"The biggest risk isn't Snowflake or Databricks — they've been the "Teradata killer" for a decade and Teradata is still standing. The real risk is narrative displacement."
"Enterprise buyers are now asking their CDOs 'what's your AI strategy?' — not 'what's your data warehouse strategy?' If Teradata is still primarily perceived as an enterprise data warehouse company when that question is being asked, it loses the conversation before the demo. The product can be world-class, but if the category definition excludes you, the deals go to OpenAI enterprise, Google Cloud, or Databricks by default."
"The mitigation — and what Raj's team is doing — is to lead with the agentic narrative aggressively, get reference customers in AI-native use cases, and make sure the technical community (data scientists, AI engineers) is advocating for Teradata in architecture decisions."
Agentic AI Deep Dive
This is Raj's current product. He will expect you to engage with it seriously. These questions will reveal whether you've actually thought about agentic AI or just read the headlines.
"A traditional LLM or chatbot is reactive: you give it a prompt, it returns a response. One turn, no memory, no action in the world."
"An AI agent is autonomous and action-oriented: it receives a goal, breaks it into sub-tasks, executes those tasks using available tools (query databases, call APIs, run code, send messages), evaluates its own progress, and iterates until the goal is achieved — often without human intervention at each step."
AI Agent: "Identify the top 3 factors behind Q3 revenue underperformance and draft a corrective action plan for the CFO." → The agent queries VantageCloud for revenue data, runs a statistical analysis, cross-references it with market data, identifies root causes, generates a formatted report, and emails it. No human involvement at each step.
Key agent capabilities not present in chatbots:
• Tool use (database queries, API calls, code execution)
• Memory (short-term context + long-term storage)
• Planning (breaking goals into sub-tasks)
• Self-evaluation (checking its own outputs for errors)
• Multi-step autonomous action
"Teradata's Agentic Platform enables agents to do this on enterprise data — with the governance, access controls, and audit trails that make it safe to actually deploy in a bank or telco."
2. Explainability and audit: When an agent recommends discontinuing a product line, the CFO will ask "why?" Enterprise buyers need a full reasoning trace — not just the output, but every decision, query, and inference the agent made. This is a first-class product requirement, not a nice-to-have.
3. Access control at agent speed: Traditional row-level security was designed for human query latency (seconds). An agent might fire 50 queries in milliseconds while solving a problem. The governance layer has to apply the same data access rules without creating a performance bottleneck that makes agents unusable.
4. Scope containment: Agents that can do anything are terrifying. Enterprise buyers need to define and enforce agent scope — "this agent can only read data, never write; only access finance data, not HR." Building the right scope definition primitives is a non-trivial product design problem.
5. Human-in-the-loop design: When should an agent pause and ask a human? Too often → annoying. Never → dangerous. The right interruption model is a product decision with enormous consequences.
"An agent that reasons from general web knowledge is answering 'what does the internet think about this topic?' An enterprise-grounded agent is answering 'what is actually true in this company's data right now?' — and that distinction is the entire value proposition."
Real-time: Web knowledge has a training cutoff. Enterprise databases are live. An agent querying VantageCloud directly gets Q4 results as they're being reported, not Q3 results from a cached model.
Provenance: You can trace every piece of information an enterprise agent used back to a specific table, row, and timestamp. That auditability is impossible with an ungrounded LLM. For regulatory compliance, this is non-negotiable.
Access-controlled truth: Enterprise data has permission layers. Grounded agents respect those layers by construction — a sales agent only sees data the sales team is authorised to see.
"The technical implementation: Teradata's agents likely use RAG (Retrieval-Augmented Generation) over structured SQL queries rather than over document embeddings — which is actually harder to get right, because SQL queries require the agent to understand table schemas, join relationships, and business logic. Getting this right is what Raj's team is working on."
Why bad: No defined agent scope. No failure modes. No governance model. No defined user role. No success criteria. Impossible to build from this.
User persona: Data analyst, intermediate SQL proficiency, not an ML expert. Has read access to customer transaction and CRM tables in VantageCloud.
Agent scope: Can query any table the analyst has read permission on. Cannot modify data. Cannot send external communications without analyst approval. Can run ClearScape Analytics ML functions.
Human-in-the-loop: Agent presents a plan ("I'll query these 3 tables and apply this churn model") before executing. Analyst approves or modifies. Final report is reviewed before distribution.
Failure handling: If the agent encounters a query error, it presents the error in plain language with a proposed fix, not a stack trace. If it reaches a data access boundary, it tells the analyst what it can't access and why.
Success criteria: 80% of analysts complete their weekly report in under 10 minutes. Analyst accuracy vs. manual report: >95% match. Zero unauthorised data accesses in first 90 days.
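The scope definition above can be made concrete as a machine-checkable policy enforced before every tool call — a sketch with illustrative field names, not a Teradata API:

```python
# Illustrative "scope containment" policy for the analyst agent described above.
# Field names and the enforcement hook are assumptions, not a Teradata API.

from dataclasses import dataclass, field

@dataclass
class AgentScope:
    allowed_actions: set = field(default_factory=lambda: {"read", "run_clearscape_ml"})
    denied_actions: set = field(default_factory=lambda: {"write", "delete", "send_external"})
    allowed_schemas: set = field(default_factory=lambda: {"crm", "transactions"})
    require_plan_approval: bool = True      # human-in-the-loop before execution
    audit_every_query: bool = True          # governance: full reasoning trace

    def permits(self, action: str, schema: str) -> bool:
        """Checked before every tool call the agent attempts."""
        return (
            action in self.allowed_actions
            and action not in self.denied_actions
            and schema in self.allowed_schemas
        )

scope = AgentScope()
assert scope.permits("read", "crm")
assert not scope.permits("write", "crm")        # agent cannot modify data
assert not scope.permits("read", "hr")          # data boundary enforced by construction
```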
RAG (Retrieval-Augmented Generation): You retrieve relevant information from your own data sources at query time and pass it to the model as context, without changing the model's weights.
→ Use when: You need the model to answer questions based on proprietary, frequently updated data. Enterprise search, document Q&A, database-grounded agents. Teradata's agentic platform almost certainly uses this for structured data retrieval.
Fine-tuning: You update the model's weights on your specific dataset, so the model permanently learns domain-specific knowledge and style.
→ Use when: You need the model to consistently behave in a domain-specific way (e.g., always produce SQL in Teradata dialect, always format outputs for financial reports). Expensive and slower to update — don't use for dynamic knowledge, only for static behaviour patterns.
In-context learning (prompt engineering): You give the model examples and instructions in the prompt itself, without changing any weights.
→ Use when: You need fast iteration, don't have enough data to fine-tune, and the task is well-specified enough to demonstrate with examples. Best for prototyping and for tasks where the pattern is straightforward.
"For Teradata's agentic use case: RAG + in-context learning for data retrieval and query generation. Fine-tuning for domain-specific behaviors like Teradata SQL dialect adherence or industry-specific output formatting."
"Most teams will be tempted to measure usage — 'agents created', 'workflows run'. That's measuring activity, not value. I'd structure metrics across three levels:"
Agent quality: Task completion rate (% of agent runs that reach their goal without human intervention or error). Target: >70% after first 90 days, >85% by month 6. Also: time-to-goal comparison vs. manual workflow equivalent.
Trust signals: What % of agent outputs were reviewed vs. immediately acted upon? A high review rate is okay early — it shows adoption. A declining review rate over time shows trust is building. A review rate that stays at 100% indefinitely means agents aren't seen as reliable.
Safety record: Zero unauthorised data accesses. Zero incorrect outputs that made it to a business decision without human review. One of these in month 1 can undo 6 months of adoption work in enterprise.
Expansion signal: # of customers who started with one agent use case and launched a second. This is the strongest early indicator of product-market fit in enterprise.
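The quality and trust measures above reduce to simple ratios over agent run logs. A toy sketch with hypothetical log fields, just to show the arithmetic:

```python
# Sketch of the agent-quality and trust metrics above, computed from run logs.
# The log field names are hypothetical, not an actual Teradata telemetry schema.

runs = [
    {"reached_goal": True,  "needed_human_fix": False, "output_reviewed": True},
    {"reached_goal": True,  "needed_human_fix": True,  "output_reviewed": True},
    {"reached_goal": False, "needed_human_fix": True,  "output_reviewed": True},
    {"reached_goal": True,  "needed_human_fix": False, "output_reviewed": False},
]

# Task completion rate: runs that reach the goal without human intervention or error.
completion_rate = sum(
    r["reached_goal"] and not r["needed_human_fix"] for r in runs
) / len(runs)

# Review rate: share of outputs a human checked before acting on them.
# High early is healthy; it should decline over time as trust builds.
review_rate = sum(r["output_reviewed"] for r in runs) / len(runs)

print(f"task completion rate: {completion_rate:.0%}")   # 50% in this toy sample
print(f"review rate:          {review_rate:.0%}")       # 75% in this toy sample
```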
"The 'AI replaces analysts' narrative is both misleading and actively harmful to enterprise adoption — and Teradata should position against it deliberately."
"The reality: AI agents don't replace analysts, they change what analysts do. The analyst who spent 4 hours pulling and cleaning data now reviews, validates, and acts on the agent's output in 20 minutes. That's not job replacement — that's massive capability amplification. The analyst who used to run 5 analyses per week can now run 25. Their judgment, domain knowledge, and accountability remain irreplaceable."
• "Your analysts make 5x more decisions with the same headcount"
• "Your data scientists finally spend their time on insights, not data wrangling"
• "AI agents handle the repeatable work; your team handles the judgment calls"
This positioning: (a) is accurate, (b) reduces internal political resistance to adoption, (c) speaks to C-suite ROI without triggering HR anxiety, (d) differentiates from dystopian AI narratives that make enterprise buyers nervous.
"The deeper truth: enterprise buyers who fear AI replacement will slow-walk adoption with governance reviews and stakeholder objections. Buyers who see AI as amplification move fast. The narrative you adopt is a product decision that affects your go-to-market velocity."
Leadership & Judgment Questions
A Sr. Director evaluates whether you show leadership instincts and sound judgment — not just execution. These questions test how you handle ambiguity, disagreement, and ethical complexity.
Situation: At Aasaan App, a senior leader wanted to add three new features to our B2B pitch deck based on a single large prospect's requests. The view was "if they want it, we build it."
Task: I believed we were at risk of building a custom solution for one account instead of a scalable product for our core market.
Action: Rather than just saying no, I reframed the question. I ran a quick analysis of our last 50 deals — what objections were raised, what features were requested, what actually drove conversion. The three requested features appeared in fewer than 8% of deal conversations. I presented this alongside a counter-proposal: instead of building the features, let's document them as a premium roadmap item, address this prospect's immediate needs through configuration, and run 5 more discovery calls with similar-size prospects to validate if this is a pattern before committing engineering cycles.
Result: The senior leader agreed to the discovery approach. The 5 calls revealed that two of the three features were indeed niche — we ultimately chose not to build them. The third made it into the roadmap 6 months later when validated by a pattern across 10+ accounts.
"The worst response to incomplete data is paralysis — waiting for more data than you'll ever have before making a call. The second worst is pretending the uncertainty doesn't exist and making a confident call anyway."
2. Distinguish signal from noise: Do I have weak data on everything, or strong data on some things? Sometimes 20% of the data gives you 80% of the signal you need for the decision at hand.
3. Make the assumption explicit: "I'm making this decision assuming X. If X turns out to be wrong, we'll know within 30 days because of Y metric, and we'll reverse course by doing Z." This makes the uncertainty a managed risk, not a hidden one.
4. Set a review trigger: Every decision under uncertainty should have a predetermined checkpoint — when will we look at this again, and what evidence would cause us to change course?
"At Enigma, almost every decision I made was under conditions of incomplete data — we were a startup. I got comfortable being explicit about assumptions and building in review loops as a standard practice."
"I think about AI ethics not as a compliance checklist but as a product quality dimension. An AI system that makes unfair decisions, harms users, or produces unauditable outputs is simply a bad product — regardless of its accuracy metrics."
2. Proportionality: The level of human oversight required should be proportional to the stakes. AI-suggested playlist? Low stakes, full automation fine. AI-recommended procurement cut that affects 200 employees? Human review is non-negotiable.
3. Bias auditing as a continuous process: Models trained on historical data inherit historical biases. At Aasaan, my lead-scoring model had a potential bias risk toward leads from larger companies (more training data) vs. smaller companies. I ran a segment-level accuracy analysis before deployment to surface and address this. It's not a one-time exercise.
4. Accountability ownership: Someone in the organisation must own accountability for every AI decision at scale. "The model decided" is never an acceptable answer. The PM, not the model, is accountable.
"At Teradata, where agents may be influencing decisions at Fortune 500 scale, these principles aren't abstract — they're product requirements."
"I think about this as translation work — not in the condescending sense of 'simplifying for non-technical people', but in the genuine sense of finding the shared language where everyone's actual concerns overlap."
With data scientists: I'm a participant in their problem framing, not just the translator. I can discuss model tradeoffs (precision vs recall, interpretability vs accuracy) at enough depth to be useful. My ML background at Aasaan gave me credibility in these conversations that a pure business PM doesn't have.
With business executives: Business outcomes and risk. Not features, not model accuracy — what decision will this help them make faster, and what's the downside if it's wrong? I connect every technical capability to a P&L line.
"The test I use: can I explain this decision in a way that makes each stakeholder feel their concerns were understood, even if they didn't fully get what they asked for? If yes, the cross-functional relationship stays strong through disagreement."
"The honest answer is that 'ready to ship' is never a perfect state — it's a risk-adjusted judgment call. The question is whether the remaining imperfection poses acceptable risk to users and the business."
Ship when:
• Core use case works reliably (>95% success rate in testing)
• Known issues are documented, scoped, and have workarounds where critical
• Failure modes are visible (we'll know when something breaks) rather than silent
• The cost of not shipping (delayed learning, competitor moves, customer commitments) exceeds the cost of shipping with current known issues
Hold when:
• A known issue could cause data loss, security breach, or significant financial harm to users
• The core job-to-be-done doesn't work reliably — shipping would damage trust more than it builds it
• We haven't put the product in front of users matching the persona who will actually use it (paper testing doesn't count)
For agentic AI specifically: The bar is higher because failures are less recoverable. An agent that makes a wrong recommendation in a demo is embarrassing. An agent that autonomously takes a wrong action on enterprise data is a customer story that kills the product category. I'd hold on safety-critical paths longer than I would for a traditional software feature.
Week 1 — Listen to the people:
Request 30-minute 1:1s with everyone on the PM team and one engineer from each major team. My agenda: what's the most important thing you think I need to understand, and what's the thing that surprised you most when you joined? Listen, don't present.
Week 2 — Use the product as a user:
Go through ClearScape Analytics Experience end-to-end. Run the Jupyter notebook demos. Try to build a small model on actual data. Break things. Find the friction. Nothing builds product intuition faster than experiencing it as a user.
Week 3 — Shadow customer conversations:
Ask to sit in on at least 2 customer calls (even as a silent observer). Enterprise customer context — the language they use, the problems they mention, the frustrations they surface — is irreplaceable.
Week 4 — Produce something:
Draft a one-pager on something I've observed — a gap, an opportunity, a question worth asking. Share it with Jefferson and Raj and invite critique. The goal isn't to be right — it's to demonstrate I can take inputs and produce structured thinking quickly.
"A good APM executes what they're given and doesn't drop balls. A great APM changes the framing of what they're working on."
"Specifically: great APMs don't just answer 'how do I build this?' They consistently ask 'are we building the right thing?' — and they earn enough trust quickly enough that when they raise that question, people take it seriously rather than dismissing it as junior input."
2. Commercial awareness: Great APMs know how their features connect to revenue, retention, or cost. They can walk into a business review and speak to business outcomes, not just feature launches.
3. Judgment under pressure: When the release deadline is tomorrow and the feature is 70% done, a great APM knows whether to ship, hold, or scope down — and makes that call with conviction and the right stakeholders, not by defaulting to whoever has the loudest voice.
"I'd rather be caught asking the right question than answering the wrong one efficiently."
Your Resume — Through Raj's Lens
Raj will read your background differently than Jefferson did. He's evaluating potential and trajectory, not just execution. Expect more "why did you do it that way" questions than "tell me about" questions.
"The biggest limitation I ran into was the gap between model performance on test data and adoption in the product context — what I'd call the 'last mile' problem of ML deployment."
At Aasaan, my lead-scoring model was technically strong — 46% efficiency lift in controlled testing. But it sat unused for two weeks after deployment because I'd optimised for model accuracy and completely ignored the user experience layer. Sales reps saw a score on their dashboard with no context — no explanation of what drove the score, no guidance on what to do with it.
1. Added context to the score: the dashboard now showed the top factors driving each score and a suggested next action, not just a number.
2. Built a feedback mechanism: reps could mark scores as "wrong" after a call, which fed back into model retraining.
3. Ran a working session with the sales team to show the model's logic on familiar accounts — building trust through transparency, not just accuracy.
"The lesson: ML model performance and ML product performance are different metrics. Optimising for accuracy without optimising for adoption is shipping a model that doesn't actually exist in the product."
B2B e-commerce SaaS (Aasaan): ROI-driven decisions, but with a human element. The buyer is often the founder or a small team — they feel the product personally. They'll forgive UX friction if the core value prop works. Churn often happens not because the product is bad, but because the user never got to the "aha moment" — onboarding is the most critical PM problem.
B2B Enterprise SaaS (Enigma, now Teradata): Multi-stakeholder, risk-averse, relationship-driven. The "user" and the "buyer" are rarely the same person. A data scientist loves the product; the CISO worries about governance; the CFO questions the contract terms. You're selling to a committee and serving individuals. Switching costs are high — which means wins compound, but so do losses.
"At Teradata, this enterprise psychology matters enormously for the Agentic Platform. The data scientist will evangelize it internally if the DX is great. But the CISO and legal team will kill the deployment if governance isn't airtight. Serving both simultaneously is the product design challenge."
"The most commercially consequential decision I made was at MyCaptain — choosing to narrow the targeting strategy rather than expand it."
When I joined, the team was selling to anyone who expressed interest — broad demographic targeting, high outreach volume, low conversion. I analysed the small set of students who had completed courses and had the best outcomes. They shared a tight profile: mid-20s, career changers from specific industries, within a decision window after a triggering event (job loss, promotion ceiling, new year).
I recommended narrowing our entire acquisition and sales strategy to this segment and accepting lower volume in exchange for higher win rate. It was a contrarian recommendation — saying no to apparent demand is uncomfortable.
Result: Conversion rate went from 3% to 7% in 60 days. Revenue grew from ₹2L to ₹10L over 4 months. The team validated product-market fit in a segment they could actually serve well, rather than diluting across segments they served poorly.
"I'd have invested earlier in a proper product analytics infrastructure. We relied heavily on qualitative signals — user interviews, sales call notes, support tickets — because we didn't have the event tracking instrumentation to quantify user behaviour in the product itself."
"Every prioritisation decision I made at Enigma was directionally right but would have been far more defensible — and far more accurate — with product usage data behind it. I knew what users said in interviews; I didn't know what they actually did in the product minute-to-minute."
"What I'd do differently: in the first month, before working on any feature, I'd instrument the core user journeys with events — every click, every action, every drop-off. The time investment would have paid back tenfold in the quality of every subsequent decision."
"This is actually one of the things I'm most excited about at Teradata — the data infrastructure exists to make those analytics decisions precisely, not approximately."
"Pega CDH is essentially a real-time decisioning engine — it takes customer signals, applies adaptive ML models, and produces next-best-action recommendations at millisecond speed. At Enigma, I used it to build our churn prediction model that scored at-risk accounts before revenue impact appeared."
1. Decision explainability: CDH decisions come with propensity scores and contributing factor breakdowns. The output isn't just "this customer will churn" — it's "this customer will churn (score: 0.82) because: declining login frequency (+0.3), support tickets this month (+0.2), payment delay (+0.15)." That transparency model is exactly what I'd want for every agentic AI decision at Teradata.
2. Adaptive learning in production: CDH models update in real time as new behaviour comes in — they don't wait for a monthly retrain. Agentic AI systems need similar online learning capabilities.
3. Action orchestration: CDH doesn't just score — it triggers actions: email, CSM alert, product nudge. That action orchestration model is one of the templates for building effective agentic workflows.
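That "score plus signed contributing factors" output translates directly into a data contract for agent decisions — a sketch with assumed field names and illustrative weights, not CDH's or Teradata's actual format:

```python
# Sketch of the "score + contributing factors" explanation contract described
# in point 1, applied to an agent decision. Field names and weights are illustrative.

from dataclasses import dataclass

@dataclass
class ExplainedDecision:
    decision: str                      # what the agent recommends
    score: float                       # model confidence / propensity
    factors: dict[str, float]          # signed contribution of each input signal
    evidence: list[str]                # queries / tables the conclusion traces to

churn_alert = ExplainedDecision(
    decision="flag account 1842 as churn risk",
    score=0.82,
    factors={
        "declining_login_frequency": +0.30,
        "support_tickets_this_month": +0.20,
        "payment_delay": +0.15,
    },
    evidence=["SELECT ... FROM crm.logins", "SELECT ... FROM billing.invoices"],
)

# The audit trail a CFO or regulator sees is the object itself, not a black box.
for factor, weight in sorted(churn_alert.factors.items(), key=lambda kv: -kv[1]):
    print(f"{factor}: {weight:+.2f}")
```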
"I've been building a native Android app called FlowState — it's a personal life management system I'm developing because I couldn't find an existing app that solved the specific problem I had: building sustainable habits across multiple life dimensions simultaneously without cognitive overload."
"What makes it interesting from a PM perspective: I wrote a full PRD before writing a line of code. The PRD is grounded in neuroscience research on habit formation, and it covers six distinct modules — punctuality, wellness, social media management, smoking harm reduction, impulsive spending, and task management. Each module has its own user psychology model and success metric."
"I'm also running a personal infrastructure stack — local AI models through Ollama, n8n for automation, Portainer for container management. I do this because I believe PMs who use the technologies they're building products around have fundamentally better product instincts than PMs who only read about them."
"The honest answer is: no, I haven't worked at that scale, and I won't pretend the gap doesn't exist. I haven't designed for billions of queries per day. I haven't dealt with the governance complexity of a Meta-scale privacy-aware data platform."
"What I can offer is this: the foundational PM skills — understanding users deeply, translating between technical and business stakeholders, making defensible prioritisation decisions, measuring what matters — don't fundamentally change at scale. The vectors of difficulty change. The complexity of stakeholder management, the consequence of architectural decisions, the rigour of testing — all amplify."
"What I'd bring to that environment isn't a pretence of experience I don't have. It's a set of working instincts I've battle-tested in fast-moving, resource-constrained contexts — and genuine hunger to learn the scale-specific craft from people who've operated at Meta and HP level. That's a trade I'd make enthusiastically."
2. Own the user feedback loop: Customer call notes, support tickets, NPS verbatims, forum discussions — senior PMs want signals from this, not the raw volume. I'd build and maintain a structured insight pipeline that surfaces the most important themes weekly.
3. Draft first — let them edit: PRDs, user stories, one-pagers — I'd produce v1 so Jefferson and others are reacting and refining, not starting from blank. An hour of a senior PM's editing time produces better output than 3 hours of generation time.
4. Run the product demo machine: ClearScape demos, customer presentations, internal roadmap decks — these are high-effort, time-consuming, and learnable. I'd take these off the senior PM's plate within 60 days.
5. Be a second set of eyes on their blind spots: As the person closest to the ground level, I'll hear things from user conversations that senior PMs don't. I'd make sure those signals reach the right people without noise.
Meta-Style Rapid-Fire Questions
Raj's Meta background means he may use compressed, high-signal questions designed to get to your instincts fast. These need crisp, confident answers — not long preambles.
"The single principle I keep coming back to: users don't know what they want, but they always know what's frustrating them. My job is to listen hard to the frustration, resist the solution they offer, and find the underlying job-to-be-done."
"This sounds simple. In practice it's the hardest discipline in PM. The instinct to say yes to user requests — especially if they're from an important customer — is powerful. Resisting it requires confidence in the problem definition that you only get from deep user research and a clear north star. Everything else in my product process exists to serve that discipline."
"I'd look for the feature that has the highest maintenance cost relative to its actual usage. Not its stated importance — its actual usage."
"In most products there are features that were important at the time they were built, that have loud internal advocates, but that actual users have largely stopped using. These are the features that consume engineering time in every release ('we have to make sure it doesn't break') without contributing to the north star metric."
"The kill criteria: usage below X% of active users for 6+ months AND no clear roadmap reason why that changes AND the removal creates engineering headroom that funds something with higher ROI. That combination makes the business case undeniable."
"A year ago I believed that LLM accuracy was the binding constraint on AI adoption in enterprise. Build a more accurate model, get faster adoption. I've updated significantly on this."
"The binding constraint isn't accuracy — it's trust architecture. We already have models accurate enough to be useful in most enterprise contexts. What enterprises don't have is the governance framework, the audit trail, the human oversight model, and the internal accountability structure to deploy them confidently."
"This is why Teradata's approach — grounding agents in governed, auditable, structured enterprise data — is more strategically important than building a marginally better model. The problem isn't the model. The problem is everything around the model."
Product: Notion (or your actual daily tool — adapt authentically)
Change: "Notion's biggest friction for me is the absence of a reliable 'capture and organise later' mode. When I have a half-formed idea or an important piece of information, the optimal action is to capture it instantly without deciding where it goes. Currently, anything I add to Notion requires me to make a location decision at the moment of capture — which creates enough friction that I skip Notion entirely and use my phone's notes app for the initial capture."
"What I'd build: an AI-powered global inbox that takes anything — a voice note, a clipped web page, a typed thought — with zero structure decisions required. Once daily, it surfaces the captured items and suggests where they might live based on your existing Notion structure. Capture now, organise when you have the context. I believe this would change Notion from a 'deliberate workspace' tool to an 'always-on thinking partner.' The north star metric: daily active captures per user."
"You haven't asked me what excites me most about the specific problem your team is working on — the Agentic AI Platform. And I think the answer matters, because someone who's excited about data warehousing and someone who's excited about autonomous AI agents are going to show up very differently to this work."
"What excites me most: the governance problem. Everyone's building AI agents. Very few people are building them in a way that a bank's legal team would actually sign off on. The work of making agents trustworthy — explainable, auditable, scope-contained — is the hard, unresolved product problem. It's the difference between AI that's a demo and AI that's a product. That's what I want to work on."
Your Questions to Ask Raj
Raj will judge the quality of your questions as much as your answers. Generic questions about culture or growth will signal you're not at his level. These are senior-director-appropriate questions.
This is the most powerful question you can ask Raj. It invites him to talk about what he actually thinks about, at the level he actually operates. It signals you're ready to engage at that level too. His answer will almost certainly be more valuable than anything in this guide, and it will make the conversation memorable for him.
You've now interviewed with both Jefferson (ClearScape Analytics PM) and Raj (Agentic AI Platform). Asking how the two products relate demonstrates you're thinking about Teradata as a system, not just two separate roles. It also shows you're already thinking like a PM who needs to understand the roadmap context, not just the immediate scope of your role.
His answer will tell you a lot about internal alignment and where the company is actually heading — information you'll need to make a good decision if you get an offer.
This question does double duty: it's a genuine rapport builder (Raj gets to reflect on his own experience) and it's self-serving information gathering for you (his answer will tell you what the real learning curve looks like if you join). It also shows intellectual curiosity about his journey, not just about the job title.
Bonus: it subtly acknowledges the experience gap without making it awkward — you're asking him to teach you something, which is exactly the posture an APM should have with a Sr. Director.
This is a different version of the "what does success look like" question — but asked in a way that's personal and direct to Raj, not generic. His answer becomes your success criteria if you get the job. It also shows you're thinking about accountability and outcomes from the first conversation, not just getting through the interview.
Ask this as your final question. It's the cleanest way to close — it leaves Raj thinking about you in the role, not just in the interview seat.
Metrics Cheat Sheet + Pre-Interview Brief
Memorise these cold. Raj may not ask about them directly, but they should be woven naturally into every relevant answer.
› Read Raj's LinkedIn activity for any recent posts (his follower count is 500+ — he may post occasionally)
› Re-read Teradata's Agentic AI platform announcements: search "Teradata Autonomous AI Knowledge Platform 2025"
› Read about Teradata's AI Factory launch (announced recently per LinkedIn)
› Know the difference: ClearScape Analytics (Jefferson's product) vs. Agentic AI Platform (Raj's product)
› Prepare your "platform vs feature PM" answer — he will probe this
› Know what RAG, ModelOps, BYOM, and model drift mean fluently
› Have a genuine point of view on agentic AI risks — not just benefits
› Open by referencing your Round 1 conversation with Jefferson — this signals seriousness
› Your experience gap is real — own it with a specific reframe, don't hide it
› Ask Q·01 (his hardest unsolved problem) — it will define the conversation