
A candid discussion on AI trust from Income School's Nathan and Julia puts the spotlight on a risk that AI stock multiples have ignored: user skepticism.
Income School founders Nathan and Julia are publicly working through their trust issues with AI and large language models, shifting the conversation from what the technology can do to whether it can function as a reliable thought-partner. The discussion, which emerged from their latest content, lands at a moment when AI optimism has been priced into software stocks with almost no discount for user skepticism.
For traders who have ridden the AI wave in names like Microsoft and Alphabet, the timing matters. The Income School audience consists of content creators, course builders, and online business owners: exactly the early-adopter cohort that software companies need to convert into paying AI subscribers. If that group begins to question the output quality or trustworthiness of LLMs, the adoption curve that underpins forward revenue estimates gets flatter.
The core of the discussion is not about whether AI can generate text, images, or code. It is about whether the output can be trusted enough to replace human judgment in a business workflow. Nathan and Julia are drawing a line between using AI as a thought-partner (a tool that accelerates ideation and editing) and treating it as a thought-leader that makes decisions independently. That distinction has direct implications for the pricing power of AI software products.
When a user trusts an AI copilot, they are willing to pay a recurring subscription and integrate it into daily operations. When trust erodes, the tool becomes a novelty that gets audited line-by-line, reducing the time savings that justify the fee. The Income School conversation surfaces a risk that has been largely absent from sell-side models: that user skepticism could cap net revenue retention for AI-powered SaaS platforms long before the technology matures.
Microsoft has embedded Copilot across Office 365 and Azure. Adobe has Firefly. Salesforce has Einstein GPT. Each of these rollouts assumes that enterprise users will adopt AI features at a pace that justifies the premium pricing tiers. The stocks have been rewarded accordingly. Microsoft trades at a forward earnings multiple that embeds a significant AI contribution, and the broader software sector has seen multiple expansion tied to the generative AI narrative.
A trust deficit changes the math. If users spend extra time verifying AI outputs, the productivity gain shrinks. If they limit AI to low-stakes tasks, the addressable market narrows. Neither scenario is priced into current valuations. The Income School discussion is a real-world signal that the user experience is not matching the investor narrative, and that gap tends to close through price rather than through upward revisions.
The immediate read-through is for the AI software names that have led the rally. A rotation out of pure-play AI hype and into companies with tangible, non-AI revenue streams becomes more likely if trust concerns spread beyond the creator economy. As covered in "AI-Powered Fraud Surge Creates New Demand for Cybersecurity Stocks," AI adoption already creates parallel demand for verification and security tools. A trust problem in LLMs could accelerate that shift, benefiting companies that audit, filter, or authenticate AI outputs rather than those that simply generate them.
There is also a read-through for the India consumer AI thesis, where, as outlined in "Venture Capital Bets on India Consumer AI: The 3-5 Year Risk Window," the bull case rests on mass adoption by price-sensitive users. If trust is a barrier for sophisticated U.S. creators, it will be a larger hurdle in markets where the cost of a bad AI decision hits harder relative to income.
The next decision point for the AI trade comes during the upcoming earnings cycle, when software companies will report net retention rates and AI attach rates for the first full quarter since the initial hype wave. Any sign that adoption is lagging the narrative will put a spotlight on the trust issue that Income School just brought into the open. For now, the conversation is a reminder that AI multiples have priced in flawless adoption, and the first cracks in that assumption are appearing from the users themselves.
Drafted by the AlphaScala research model and grounded in primary market data: live prices, fundamentals, SEC filings, hedge-fund holdings, and insider activity. Each story is checked against AlphaScala publishing rules before release. Educational coverage, not personalized advice.