The Pain: Three Lenses, or Three Choices?
Kevin came to me with three documents.
He’s a Senior Architect.
One was the EL Client Journey for Agentic AI. One was AI + SDLC. One was DevSecOps.
“Tim,” he said, “the board reviews next month. Where do we start?”
His confusion was real.
Three documents. Three apparent choices.
- Agentic AI: customer experience through the lens of AI capability.
- AI + SDLC: code generation through the lens of software development.
- DevSecOps: automation pipelines through the lens of CI/CD.
But here’s the thing:
What happens when a workshop only covers one lens?
- Lead with Agentic AI? Leadership sees the vision — but without governance and validation pathways, it's hard to own the decision. Engineers feel it's not grounded.
- Lead with AI + SDLC? Engineers feel it's finally useful. Leadership thinks it's just a tool.
- Lead with DevSecOps? Engineers see extra overhead. Leadership sees no innovation.
They thought it was pick A, B, or C.
It’s actually a relay.
Without DevSecOps as a governance foundation, Agentic AI is just slides. Without an SDLC feedback loop, DevSecOps tends to stay at local automation — harder to connect to business validation.
These aren’t parallel options.
They’re dependencies.
So the workshop shouldn’t open with “here’s what AI can do.”
It should open with “what’s slowing you down?”
Act 1: Pain Discovery (No AI Talk)
In the first act, I make a point of not mentioning AI at all.
Why?
In enterprise workshops, the moment AI comes up, brains shift into evaluation mode.
Evaluating cost. Evaluating risk. Evaluating feasibility.
Then the defensive posture kicks in.
“Our data isn’t complete.” “We don’t have the budget.” “Our team doesn’t know how to write prompts.”
The opening shouldn’t be “here’s what we have.” It should be “what are you running into?”
I usually bring a blank sheet of paper, or a Miro board.
“Don’t think about solutions yet. Five minutes. Put it up.”
Three prompt questions:
- The last time your team stayed late because of repetitive work — how late?
- The last time a miscommunication caused requirements to be redone — how many days?
- If you had a magic wand tomorrow, which process friction would you eliminate first?
Let leadership and engineers answer together.
Leadership says: “Customer response is too slow.” Engineers say: “I spend 2 hours a day writing unit tests.”
Now the pain map is drawn.
Not imposed by AI. Built by them.
More concretely, this phase is about building shared understanding.
When an engineer says “I spend 2 hours a day on unit tests,” leadership may pause.
They assumed engineers were shipping new features.
Turns out half the time goes to maintaining old debt.
That’s when you’re ready for Act 2.
Act 2: Three-Lens Mapping (Matched to Pain)
Now map the pain to the three lenses.
This isn’t a sales pitch.
It’s a translation.
Lens C: DevSecOps (Execution Layer)
Matches pain: “Repetitive work,” “frequent deploy failures,” “environment inconsistency.”
Core question: “We have a pipeline — where does AI fit?”
Concrete scenarios:
- Auto-generate unit tests.
- Auto-suggest code review feedback.
- Auto-patch common security vulnerabilities.
Adoption difficulty: Low (prerequisites: private repo, access control, no sensitive code sent out, human review gate). Time to value: Fast (2–4 weeks). Compliance risk: Medium (strict AI access controls required).
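Those prerequisites can be made concrete as a merge gate. Below is a minimal sketch in Python — the field names (`ai_generated`, `touched_paths`, `approved_by`) and the sensitive-path prefixes are illustrative assumptions, not taken from any specific CI tool — showing the human review gate and the "no sensitive code sent out" rule as code:

```python
# Hypothetical CI merge gate for AI-assisted changes.
# All field names and path prefixes are illustrative assumptions.

SENSITIVE_PREFIXES = ("secrets/", "config/prod/")

def gate_ai_change(change: dict) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed merge."""
    # Rule 1: AI-touched changes never reach sensitive paths.
    touches_sensitive = any(
        p.startswith(SENSITIVE_PREFIXES) for p in change["touched_paths"]
    )
    if change["ai_generated"] and touches_sensitive:
        return False, "AI-generated change touches sensitive paths"
    # Rule 2: human review gate — every AI-assisted change needs an approver.
    if change["ai_generated"] and not change.get("approved_by"):
        return False, "missing human review"
    return True, "ok"
```

In practice the same two rules usually live in branch-protection settings rather than a script; the point is that the "trust account" starts with rules this small.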
Common gap to watch for: Teams that skip DevSecOps and move directly to Agentic AI often find that agent-generated code carries security gaps, and without automated testing, deployments become unstable. That gap between expectation and reality erodes developers' willingness to adopt AI.
DevSecOps isn’t a technical choice. It’s the first deposit in a trust account.
Starting with DevSecOps is the lowest-risk, fastest-payoff path.
Why?
It doesn’t change the business logic.
It changes the developer’s daily experience.
For leadership, it’s “efficiency gains.” For engineers, it’s “less grunt work.”
Lens B: AI + SDLC (Process Layer)
Matches pain: “Requirement misunderstandings,” “architectural confusion,” “knowledge gaps.”
Core question: “We’re writing software — how does AI help?”
Concrete scenarios:
- AI-assisted requirements analysis.
- AI-generated architecture diagrams.
- AI-assisted documentation maintenance.
Adoption difficulty: Medium. Time to value: Medium (1–3 months). Compliance risk: Medium — depends on data classification, vendor isolation, access controls, and retention policies.
Common gap to watch for: Introducing AI requirements analysis before the process has structure often means low-quality inputs push outputs away from the actual requirements. Without structured requirement documents as a baseline, AI amplifies whatever assumptions are in the input, so requirement drift surfaces faster and can spread further.
The tradeoff:
AI + SDLC is a good fit for teams with chaotic processes.
But if your team doesn’t have requirement documents to begin with, AI-assisted analysis may amplify existing gaps and make outputs harder to validate.
The prerequisite: your process needs structure first.
AI is an amplifier.
It amplifies what you already have.
If the process is scattered, AI just accelerates the scatter.
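One way to make "structure first" operational is a pre-flight check: refuse to send a requirement document to AI-assisted analysis until it has a minimal skeleton. A small sketch, where the required field names are illustrative assumptions rather than any standard:

```python
# Hypothetical pre-flight check before AI requirements analysis.
# The required field names are illustrative, not a standard.

REQUIRED_FIELDS = ("goal", "actors", "acceptance_criteria", "out_of_scope")

def ready_for_ai_analysis(doc: dict) -> list[str]:
    """Return the missing fields; an empty list means the doc has
    enough structure to be worth amplifying."""
    return [f for f in REQUIRED_FIELDS if not doc.get(f)]
```

If the returned list is non-empty, fix the document first — otherwise the amplifier has nothing solid to amplify.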
Lens A: Agentic AI (Capability Layer)
Matches pain: “Room to improve customer experience,” “decision lag,” “innovation bottlenecks.”
Core question: “We have AI capability — where can it go?”
Concrete scenarios:
- Automated customer service agents.
- Automated data analysis agents.
- Automated market forecasting agents.
Adoption difficulty: High (prerequisites: data maturity, auditing, access controls, human review, rollback conditions). Time to value: Slow (3–6 months, depending on data maturity — working assumption). Compliance risk: High (business process and decision audit exposure).
Common gap to watch for: Going live with Agentic AI before DevSecOps is in place creates serious exposure: agents may touch sensitive customer data, and without automated monitoring and audit trails, incorrect decisions directly affect business revenue.
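What "audit trail plus rollback condition" can mean in code, as a minimal sketch — the threshold, field names, and in-memory log are all hypothetical, not from any agent framework: every decision is recorded before it runs, and high-impact decisions are held for a human.

```python
import time

# Hypothetical guard around an agent decision. Threshold and field
# names are illustrative assumptions.

AUDIT_LOG: list[dict] = []       # append-only audit trail (in-memory for the sketch)
REVENUE_THRESHOLD = 1000.0       # above this, a human must sign off

def record_and_gate(decision: dict) -> bool:
    """Audit the decision; return True only if it may auto-execute."""
    entry = {"ts": time.time(), **decision}
    AUDIT_LOG.append(entry)      # record before acting, never after
    if decision.get("revenue_impact", 0.0) > REVENUE_THRESHOLD:
        entry["status"] = "held_for_human_review"
        return False             # the rollback condition: human in the loop
    entry["status"] = "auto_executed"
    return True
```

A real system would persist the trail and wire the hold into a review queue; the shape of the guard is the point.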
Agentic AI is currently a better fit for teams that already have governance, data infrastructure, and monitoring in place.
The market signal isn’t fully clear yet, but the direction is.
If your team hasn’t automated CI/CD, this lens can wait.
That’s not falling behind.
That’s different priorities.
Agentic AI isn’t a tool. It’s a business model.
It shouldn’t be treated as “dev tooling.”
It should be treated as “product innovation.”
Act 3: Shared Roadmap (Quick Win → Core Change → Strategic Leap)
Now the pain is clear.
The lenses are lit.
Next comes the hardest part: sequencing.
A lot of workshops stall here.
Because leadership wants Agentic AI (it looks impressive).
Engineers want DevSecOps (they want fewer late nights).
The key framework: Quick Win → Core Change → Strategic Leap
Quick Win (2–4 weeks)
Goal: Build trust.
Pick: DevSecOps.
Concrete actions:
- Add AI code review to CI/CD.
- Auto-generate unit tests.
Exit Criteria:
- Test coverage up 20%.
- Engineer adoption of AI assistance > 50%.
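Those two exit criteria translate directly into a check. A small sketch, assuming coverage is measured in percentage points (adjust if your team reads "up 20%" as relative):

```python
# Hypothetical Quick Win exit check mirroring the criteria above.
# Assumes "coverage up 20%" means +20 percentage points.

def quick_win_met(coverage_before: float, coverage_after: float,
                  engineers_using_ai: int, engineers_total: int) -> bool:
    coverage_gain = coverage_after - coverage_before
    adoption = engineers_using_ai / engineers_total
    return coverage_gain >= 20.0 and adoption > 0.5
```

Agreeing on the formula up front — before the two to four weeks start — is what makes the exit criteria a commitment rather than a slide.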
Why?
It’s fast.
It doesn’t touch the business.
It lets engineers experience “AI actually works.”
Core Change (1–3 months)
Goal: Reshape the process.
Pick: AI + SDLC.
Concrete actions:
- AI-assisted requirements analysis.
- AI-generated architecture diagrams.
Exit Criteria:
- Improved consistency between requirements and code.
- Architecture decisions become traceable.
Why?
By now, the team is used to AI assistance.
They’ve started to understand how to let AI support their own judgment.
Strategic Leap (3–6 months)
Goal: Business innovation.
Pick: Agentic AI.
Concrete actions:
- Automated customer service agents.
- Automated data analysis agents.
Exit Criteria:
- Audit trail complete.
- Rollback mechanism for incorrect decisions verified.
Why?
By now, the process is stable.
The data is integrated.
The team already has an “AI mindset.”
When the underlying assumptions don’t align, the cost is rework and lost trust.
If you do Agentic AI first, then DevSecOps later —
Agent-generated code arrives full of vulnerabilities.
Business agents make decisions without data support.
Deployments won’t hold.
That’s not a tooling problem.
That’s a sequencing problem: the architecture of adoption was wrong.
Managing Up After the Workshop
After the workshop, leadership will usually ask: “So which one should we do?”
The framing difference:
One framing sells the vision. The other owns the accountability.
Vision framing: “We should do Agentic AI — it’s where the market is going.”
Accountability framing: “I want to try this on one small feature. If it breaks, I own it. Two weeks, metric-validated, with a rollback condition.”
That second version tells your manager:
You’re not selling technology.
You’re managing risk.
Leadership tends to look for predictability before engaging with technical detail.
DevSecOps provides predictability.
Agentic AI, at this stage, carries more open variables.
When your manager asks “why not just go straight to Agentic AI?”, a more grounded response:
“We haven’t built up the trust account yet. Let’s close the gaps on access control, auditing, and rollback — then keep risk exposure within what we can absorb.”
Closing: The Workshop Is Not the Finish Line
The workshop isn’t the finish line.
It’s the first commit.
The real work starts after.
How do you keep momentum?
How do you handle setbacks?
How do you keep leadership bought in?
For now, pain density and the length of your validation cycle are a reasonable basis for those judgments.
But asking this question at all — that’s the first step.
Where are you putting your ten minutes today?
Evaluating how impressive AI looks?
Or evaluating how deep your team’s pain runs?
Different choice. Different outcome.
Sources
- Gartner: Top Strategic Technology Trends for 2024
  - Supports: opening context (GenAI adoption challenges and governance)
- Microsoft Learn: Azure AI + DevOps Best Practices
  - Supports: AI + SDLC process structure and requirements analysis
- OWASP: AI Security and Privacy Guide
  - Supports: Agentic AI compliance risk and audit trails
- AWS Well-Architected Framework: Generative AI Lens
  - Supports: DevSecOps and governance foundation (trust account)