The Illusion of Intelligence: Why AI Feels Smarter Than It Is
Tikona Capital
Jul 2 · Updated: Jul 22

As an equity research analyst, I’m trained to look at trends, dig into data, and question narratives. So when it comes to artificial intelligence, particularly generative AI like ChatGPT, I try to move beyond the hype and ask the deeper questions: What is it really doing? Where is the value being created? And most importantly, what are we not seeing clearly?

Recently, I came across an amazing essay by science fiction writer Ted Chiang, published in The New Yorker, titled “ChatGPT Is a Blurry JPEG of the Web.” Chiang, known for short stories like Story of Your Life (which inspired the film Arrival), has a knack for bringing deep philosophical questions into sharp focus. His essay doesn’t hype AI, nor does it dismiss it. It gives us a metaphor: ChatGPT is a blurry image of the internet, not a new creation, but a compressed, lossy version of what’s already there.

This metaphor has significant implications not just for AI itself, but for how we think about knowledge, originality, and even the future of industries built on information.
Why AI Seems to Understand Us
Imagine you had a digital photo of the entire internet. Now imagine saving it repeatedly, with some data lost each time, a process known in tech as “lossy compression.” That’s what LLMs are doing. They take in massive amounts of text from the web, compress it into statistical patterns, and then reconstruct responses from those patterns. But they don’t retain the full image. Some nuance, precision, and truth inevitably get lost in the blur.

And yet, strangely, it is this very imperfection that makes us believe AI understands us. When ChatGPT paraphrases a fact instead of quoting it, we think, “Ah, this machine has comprehension.” When it responds like a student writing in their own words, we attribute insight where there is only mimicry. This illusion is powerful. But illusions can be dangerous if we mistake them for reality.
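To see why “lossy” is the key word, here is a toy sketch in Python (my own illustration, not anything from Chiang’s essay). A lossless codec like zlib reconstructs text exactly; a deliberately crude lossy scheme, dropping vowels, saves space but can never give the original back:

```python
import zlib

text = "Compression trades fidelity for size; lossy schemes never give it all back."

# Lossless: the original is recovered bit-for-bit.
packed = zlib.compress(text.encode())
assert zlib.decompress(packed).decode() == text

# A toy lossy scheme: drop vowels. Purely illustrative, not a real codec.
def lossy_compress(s: str) -> str:
    return "".join(c for c in s if c.lower() not in "aeiou")

blurry = lossy_compress(text)
print(len(text), len(blurry))  # the lossy copy is shorter...
print(blurry)                  # ...but the original can only be guessed at
```

The decompressed zlib text is identical to the input; the vowel-stripped version is smaller but permanently ambiguous, which is roughly the trade an LLM makes when it stores the web as statistical patterns rather than verbatim text.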
A Better Search Engine, or Just a Louder Echo?
Investing, at its core, is a decision-making process based on clarity, not confidence.
Now imagine this process being increasingly driven by AI-generated summaries. A ChatGPT-like system pulls data from hundreds of sources (analyst reports, earnings calls, blog posts) and gives you a simplified answer: "This stock looks strong based on fundamentals." But wait: whose fundamentals? Which analyst? Which assumptions? What context? This is the illusion of intelligence.
The AI didn’t analyze the company; it compressed what others said. And if its training data includes other AI outputs, the loop becomes even more fragile. You're not just reading a summary of a report. You’re reading a summary of a summary of a summary.
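How quickly that loop degrades can be mimicked with a deliberately crude sketch. Here “keep every other word” stands in for any lossy summarizer (no real model works this way); three passes are enough to turn a coherent claim into fragments:

```python
# Toy stand-in for a lossy summarizer: keep every other word.
def summarize(words: list[str]) -> list[str]:
    return words[::2]

report = ("revenue grew nine percent but margins fell because input costs "
          "rose faster than the company could raise prices").split()

generation = report
for i in range(3):
    generation = summarize(generation)
    print(f"pass {i + 1}: {' '.join(generation)}")
```

After three passes only "revenue input raise" survives: the words are all genuine, yet the claim they originally supported (margins fell) has vanished. That is the summary-of-a-summary problem in miniature.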
As Sam Altman once said: “The value of AI depends on the quality of the data it's trained on and most of the internet is noise.”
The Subtle Blur Across Sectors
While the financial world raises red flags around AI-generated summaries and decision-making, other sectors are also navigating this blurry transformation, each in their own way.
Take medicine, for instance. AI tools today can analyze radiology scans faster than human doctors, identify patterns in genomic data, and even assist in early diagnosis. These breakthroughs offer real hope, especially in underserved regions with limited medical staff. But there’s a fine line. An AI suggesting a probable illness is helpful; an AI misdiagnosing based on incomplete data is not. In healthcare, precision matters more than speed. The stakes are human lives.
Legal services face a similar dilemma. Generative AI can draft contracts, summarize case laws, and support paralegals in research. That’s a win for productivity. But when an AI, without understanding context, fabricates cases, as seen in a few high-profile courtroom incidents, it can mislead even trained professionals. A small hallucination in legal language can flip the meaning of an argument.
In education, AI has democratized learning. Students now have access to personalized tutoring, instant explanations, and feedback tools. But it also raises questions. If AI writes essays or solves problems, what happens to actual learning? Are we training minds or just training prompts?
Even in creative industries, where originality is currency, AI-generated art, music, and writing blur the lines between inspiration and imitation. While some embrace it as a collaborative tool, others see it as diluting the essence of human expression.
None of this is to suggest AI is inherently harmful. It’s powerful, and in many cases, deeply beneficial. But when it’s used without care or believed to be more capable than it is, it can silently shift the balance from informed decisions to automated assumptions.
As Elon Musk put it, “AI doesn’t hate you, but it’s also not your friend. It’s indifferent.”
What’s Worth Remembering
Ironically, the rise of AI makes human expertise more valuable, not less.
In a world where AI produces generic content, original thought becomes a moat. Analysts who can ask the right questions, spot anomalies in balance sheets, and connect dots across industries will become more important. Because what AI lacks is judgment. It can scan, summarize, and simulate. But it cannot think. That remains our edge.
As Mark Zuckerberg said, “AI will magnify what we do, not replace who we are.”
The job of the analyst, then, is not to compete with AI, but to use it without being used by it.
So What Should We Do With AI?
As professionals, creators, analysts, and thinkers, our task is not to fight AI, nor to blindly embrace it. It’s to use it wisely, to know what it is and what it isn’t. We should use AI to save time, but not to skip the hard thinking. To assist, but not to decide. To get inspired, but not to surrender authorship.
We are entering an age where artificial intelligence may indeed reshape how we think, work, and invest. The picture ahead is not crystal clear; perhaps it’s meant to be a bit blurry.
But sometimes, multiple blurry pictures, when layered, refined, and observed carefully, create a powerful story. And maybe that’s the point. Not to fear the blur. But to find meaning in the midst of it.
Ready to Start Your Investment Journey?
We bring our expertise to Mutual Fund Investing! You may download our ANDROID APP here
Or SIGNUP at: wealth.tikonacapital.com
Disclaimer: This content is for informational purposes only and does not constitute financial advice. Please consult a licensed advisor before making investment decisions.