The Foggy Horizon of Computer Science: Beyond the Illusions of AGI
“Every field must reckon with its boundaries. Computer science, for all its dazzling progress, is no exception.”
The Unspoken Assumption Behind Decades of Progress
Much of modern computer science—algorithms, architectures, and recently, machine learning—has silently ridden the back of Moore’s Law. For decades, we scaled not just codebases but the very assumption that “faster, cheaper computation” was always around the corner.
This implicit trust shaped how we engineered solutions:
- We wrote inefficient code assuming CPUs would catch up.
- We trained deeper neural networks assuming GPU memory would double.
- We delayed architectural refactors assuming cloud costs would drop.
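The first bullet is worth making concrete. A sketch of my own (not from any particular codebase): the quadratic version below "works fine" on small inputs, and for years hardware growth quietly forgave it.

```python
def dedup_quadratic(items):
    """O(n^2): each membership test scans the whole result list."""
    result = []
    for x in items:
        if x not in result:  # linear scan per element
            result.append(x)
    return result

def dedup_linear(items):
    """O(n): a set gives near-constant-time membership tests."""
    seen = set()
    result = []
    for x in items:
        if x not in seen:
            seen.add(x)
            result.append(x)
    return result
```

Both return the same answer, but when inputs grow a thousandfold, only one of them still finishes. When CPUs stopped getting faster for free, the difference stopped being academic.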
But now, the curve is bending. Moore’s Law has slowed. Dennard scaling has stopped. And instead of cleaner abstractions, we are stacking models and complexity atop a crumbling physical foundation.
The Rise of Generative AI — A Shift, Not Salvation
Enter generative AI. Built on GPUs, guided by attention mechanisms, powered by trillions of FLOPs. This shift is not a reinvention of computing—but a rerouting. From logic-based programming to statistical brute force. From hand-crafted algorithms to probabilistic guesswork.
But this detour comes with costs:
- Energy-hungry computation
- Opaque reasoning paths
- Minimal real-time reasoning or planning capabilities
Even quantum computing, long promised as the next leap, offers no known polynomial-time algorithm for NP-hard problems. Grover's search yields only a quadratic speedup, turning 2^n into 2^(n/2), which is still exponential; Shor's exponential advantage applies to a narrow class of problems like factoring, not to most practical scenarios.
What If This Is the End of the Road?
What if the reason AGI seems just around the corner is because we’re running in circles at the edge of what’s computable?
What if, despite billions of parameters, we’re still solving a tiny fraction of real-world problems?
This thought haunts me. Not with fear, but with a profound discomfort:
- Have we mistaken scale for progress?
- Is complexity adding value, or just entropy?
The Human Cost of Complexity
In my work—building agents, orchestrating tasks, scaling systems—I see the same pattern repeat:
- More nodes, more bugs.
- More microservices, more cognitive load.
- More frameworks, longer onboarding.
Distributed systems don’t scale human understanding. They scale failure modes. We debug partial outages like forensic detectives. We build abstractions we barely understand. We deploy, rollback, and pray.
This isn’t engineering anymore—it’s crisis choreography.
So What Is the Role of a Computer Scientist?
If we can’t make things simpler, faster, or fundamentally more powerful—what do we do?
I don’t have all the answers. But here’s what I believe:
- We need to return to fundamentals—clarity, constraints, and computation models that are explainable.
- We must optimize for human time, not just compute time.
- We need to resist the cargo cult of AGI and focus on building useful, dependable systems.
The future of computer science isn’t about chasing unicorns. It’s about accepting limitations—and engineering meaningfully within them.
Final Thought
In a world where complexity sells and simplicity is hard, I find myself searching not for breakthroughs—but for balance. Between scale and sanity. Between ambition and grounding.
Maybe the future of CS isn’t in what we can do next—but in what we choose not to do.