The Question
Why the future of architecture still depends on human judgment
The future is not a contest between human and machine. It is a test of whether we can still see what the machine was never asked to see.
That sentence may sound strange outside semiconductor design. But after many years in memory architecture, I have seen it play out more than once.
Timing can close. Simulation can pass. A review meeting can end without objection. The report can look clean.
And still, the design can fail — because the wrong question was asked at the beginning, or because a constraint that mattered lived outside the file everyone trusted.
This is why my reflection on AI does not begin with productivity. It begins with a quieter question: when answers become faster, who remains responsible for asking whether the question was right?
That is no longer a distant idea. EDA vendors are building AI assistants, optimization engines, and agentic workflows into semiconductor design environments. Some focus on information retrieval and script guidance. Some optimize PPA. Others begin to orchestrate multi-tool workflows across design, verification, and sign-off.
I do not question this direction. I welcome it.
My question is narrower and more uncomfortable: as tools become better at execution, are we also improving the human process that defines the problem?
AI-assisted EDA is shifting from isolated productivity features toward broader design-flow assistance: generative copilots, AI-driven PPA exploration, and early agentic workflows.
The technical opportunity is real. The architectural question is whether faster execution also comes with better problem framing.
Give a tool known inputs, a defined objective, and a measurable optimization target, and it can move with impressive speed.
But architecture often becomes difficult before the objective is clean. The real work is not always solving inside the box. Sometimes it is discovering that the box was drawn around the wrong boundary.
AI accelerates bounded execution. Architecture often begins when the boundary itself is uncertain.
In HBM, chiplet, and heterogeneous integration, a difficult constraint may be distributed across several domains.
A thermal assumption may sit with the package team. A power-integrity margin may depend on TSV distribution. A timing closure result may depend on workload behavior that was not part of the original local model. A mechanical or assembly constraint may appear late enough to make an earlier electrical assumption stale.
In advanced integration, thermal and power integrity are not separate checklist items. They often become cross-domain bottlenecks, where package structure, TSV distribution, workload behavior, timing margin, and physical layout begin to constrain one another.
This is exactly why AI can accelerate many checks, but the architect still has to ask whether the right domains were connected in the first place.
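The coupling described above can be made concrete. Here is a minimal sketch, in Python, of how a timing check that passes at its local reference corner can fail once the thermal domain is actually connected to it. All functions, names, and numbers are hypothetical placeholders for illustration, not real silicon data or any vendor's model.

```python
def timing_margin_ps(base_margin_ps: float, temp_c: float,
                     derate_ps_per_c: float = 0.8,
                     ref_temp_c: float = 85.0) -> float:
    """Illustrative linear derating: timing margin shrinks as
    junction temperature rises above the reference corner."""
    return base_margin_ps - derate_ps_per_c * (temp_c - ref_temp_c)

def junction_temp_c(ambient_c: float, power_w: float,
                    theta_ja_c_per_w: float) -> float:
    """Simple steady-state thermal model: Tj = Ta + P * theta_JA."""
    return ambient_c + power_w * theta_ja_c_per_w

# A local timing check at the reference corner passes...
assert timing_margin_ps(20.0, temp_c=85.0) > 0

# ...but connecting the thermal domain (workload power through the
# package's thermal resistance) shows the same path failing under
# a realistic sustained load.
tj = junction_temp_c(ambient_c=45.0, power_w=12.0, theta_ja_c_per_w=6.0)
print(timing_margin_ps(20.0, temp_c=tj))  # negative: margin consumed by heat
```

The point of the sketch is not the numbers. It is that neither function alone can see the failure; only composing the two domains reveals it, and a tool is only asked to compose what someone thought to connect.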
Imagine an architecture team using an AI-assisted constraint propagation tool to check interface assumptions across a complex die-to-die connection in a 3D-stacked design.
The tool runs correctly. It sees the input file. It finds no violations. The sign-off proceeds.
Three weeks later, during system bring-up, a timing violation appears under a thermal loading condition that another team had identified after the tool's input was frozen.
The tool did not fail. It answered the question it was asked. The failure was in the process around it: no one had asked whether the input itself was still alive.
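One lightweight process guard for this failure mode is to treat sign-off inputs as perishable: record when each input was frozen and who owns it, and flag any input older than the most recent cross-team change before trusting a clean result. The sketch below is a minimal illustration with hypothetical file names and dates; it is not any tool's real interface.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SignoffInput:
    """A frozen input file with provenance metadata (illustrative fields)."""
    name: str
    frozen_at: datetime   # when this snapshot was taken
    owning_team: str      # who can invalidate it

def stale_inputs(inputs: list, last_cross_team_update: datetime) -> list:
    """Return inputs frozen before the most recent cross-team change.

    The tool can still run; this only forces a human to re-ask
    whether the question it answers is still the right one.
    """
    return [i for i in inputs if i.frozen_at < last_cross_team_update]

# Example: a thermal map frozen before the package team's latest finding.
inputs = [
    SignoffInput("die2die_timing.sdc", datetime(2025, 3, 1), "timing"),
    SignoffInput("thermal_map.json",   datetime(2025, 2, 10), "package"),
]
flagged = stale_inputs(inputs, last_cross_team_update=datetime(2025, 2, 20))
print([i.name for i in flagged])  # → ['thermal_map.json']
```

The check is trivial on purpose. The hard part is not the code; it is deciding that "is this input still alive?" belongs in the sign-off ritual at all.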
A simulation does not know what was forgotten.
A timing report does not know whether the constraint is still alive.
A tool does not feel the cost of a wrong assumption becoming silicon.
This is the kind of failure AI will not automatically prevent. In some cases, it may make the failure harder to notice, because the answer looks cleaner than the question deserves.
Analog and mixed-signal design taught me a similar lesson.
A model can be clean while the physical world remains untidy. PVT variation, noise coupling, supply droop, layout-dependent parasitics — these are not just parameters. They are reminders that reality always has a texture that the model only approximates.
The architect's job is not to distrust models. It is to understand what kind of reality the model has chosen to ignore.
That understanding is not merely technical knowledge. It is physical intuition: the sense that a clean log can still hide a dirty boundary condition.
I am not most concerned that AI will replace architects. My deeper concern is quieter.
If AI removes too much of the tedious work too early, young engineers may also lose the slow friction through which judgment is formed.
That slow, manual traversal of the work was sometimes boring and often inefficient. But it forced the engineer to see the shape of the problem before receiving the conclusion.
When that path disappears, speed increases. But calibration may not follow.
AI will become part of architecture work. I welcome that. But coexistence cannot mean surrendering the parts of the work that carry consequence.
Specifications say many things. They also stay silent in dangerous places. The architect must recognize what the document does not know how to say.
Every interface contract is an assumption made physical. Voltage, timing, reset, thermal behavior — once signed off, they become consequences.
Architecture is partly the art of preserving options that do not look valuable yet. A future team often pays for what today's team closed too early.
I do not believe the right answer is to resist AI. I also do not believe the right answer is to worship it.
The better path is to let AI accelerate what can be accelerated, while becoming more deliberate about what must remain accountable.
That means teaching young engineers not only how to ask a tool, but how to test the question. It means preserving enough friction to build judgment. It means writing down not only what was decided, but why it was decided.
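Writing down why a decision was made can be as simple as a structured record that carries the question that was asked, the assumptions behind the answer, and the conditions that would reopen it. The sketch below is one possible shape, with invented fields and example strings; it is an illustration of the practice, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    """A lightweight architecture decision record (illustrative fields)."""
    decision: str
    question_asked: str   # the framing, not just the answer
    assumptions: list     # what must stay true for this to hold
    invalidated_if: list  # events that reopen the decision

record = DecisionRecord(
    decision="Sign off die-to-die interface at target data rate",
    question_asked="Does timing close across PVT with the frozen thermal map?",
    assumptions=["thermal_map.json reflects the final package stack"],
    invalidated_if=["package team revises TSV distribution",
                    "sustained workload power exceeds the modeled budget"],
)

def needs_review(record: DecisionRecord, events: list) -> bool:
    """A decision reopens when any invalidating event has occurred."""
    return any(e in record.invalidated_if for e in events)

print(needs_review(record, ["package team revises TSV distribution"]))  # True
```

The value is not the data structure. It is that a future team inherits the reasoning, not only the conclusion, and knows which events make the conclusion stale.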
The future architect must learn two things at once: how to ask AI better questions, and how to remain responsible when the answer becomes silicon.
For me, this is the role I still want to keep practicing: to stand at the end of every architecture decision and ask, calmly and honestly, whether the question was drawn correctly before the answer becomes silicon.
AI makes it harder to hide in execution. Whether that is a threat or a clarification depends on what we were doing there.