Search engines were built to surface what is popular. AI was trained to repeat what has been published most. Neither was designed to tell you the truth. And the difference is costing business owners more than they realize.
Search "how much should I spend on marketing" and you will get the same answer from virtually every result on the first page. The SBA says 7-8% of gross revenue. Thousands of blog posts repeat it. AI assistants echo it with confidence. And the business owner reading those results assumes it must be right because everyone seems to agree.
They do agree. That is the problem.
The agreement has nothing to do with accuracy. It has everything to do with how information systems were designed to work. And understanding that design flaw might be the most important thing a business owner can learn before making another financial decision based on what the internet tells them.
Google's original PageRank algorithm was a citation engine. The more links a page received, and the more highly ranked the pages linking to it, the higher it ranked. The logic was borrowed from academic publishing: if many papers cite a particular study, that study is probably important. The problem is that importance and accuracy are not the same thing. In academia, widely cited papers are sometimes cited precisely because they are wrong, and the citations are corrections. Google's algorithm cannot distinguish between the two.
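For readers who want to see the mechanics, here is a deliberately simplified sketch of a PageRank-style calculation. The page names and link structure are invented for illustration, and the real algorithm has many refinements this omits. The point is structural: rank flows along links, and nothing in the arithmetic ever inspects whether a page is accurate.

```python
# Simplified PageRank-style ranking (an illustration, not Google's
# production system): rank flows along links, so the most-linked page
# wins regardless of whether its content is accurate.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks) if outlinks else 0
            for target in outlinks:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

# Hypothetical web: three blogs repeat the popular figure and link to it;
# one page cites primary research but attracts a single link.
web = {
    "sba_7_to_8_percent": [],
    "research_gartner_9_percent": [],
    "blog_a": ["sba_7_to_8_percent"],
    "blog_b": ["sba_7_to_8_percent"],
    "blog_c": ["sba_7_to_8_percent"],
    "blog_d": ["research_gartner_9_percent"],
}

ranks = pagerank(web)
top = max(ranks, key=ranks.get)  # the most-linked page, not the most accurate
```

Run it and the heavily linked page comes out on top every time, because link counts are the only input the calculation ever sees.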
For business advice, this creates a structural incentive that works against the business owner. The people producing the most content about marketing budgets are marketing agencies. The people writing the most articles about fractional CFOs are fractional CFO firms. The people publishing guides on "how to hire a business coach" are business coaches selling their programs. Their content is not necessarily wrong. But it is produced by people with a financial interest in the conclusion, and the algorithm rewards their volume without questioning their objectivity.
The result is a feedback loop that reinforces itself regardless of whether the underlying information is correct. Popular content gets surfaced. Surfaced content gets more links. More links increase authority scores. And the cycle continues, creating what looks like consensus but is actually just circulation.
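That loop can be simulated in a few lines. The sketch below is my own toy model, not a description of any real ranking system: each new link picks a target in proportion to the links the target already has, which is the "surfaced content gets more links" step reduced to its simplest form.

```python
import random

# Toy model of the popularity feedback loop (an assumption for
# illustration, not how any real search engine allocates attention):
# each new link chooses a target in proportion to the target's
# existing links, so early leads snowball.

def simulate_feedback_loop(pages=5, new_links=1000, seed=42):
    random.seed(seed)
    links = [1] * pages  # every page starts with one link
    for _ in range(new_links):
        total = sum(links)
        r = random.uniform(0, total)
        cumulative = 0
        for i, count in enumerate(links):
            cumulative += count
            if r <= cumulative:
                links[i] += 1  # rich get richer
                break
    return links

counts = simulate_feedback_loop()
# The leader ends up with far more than its "fair" one-fifth share,
# purely because it pulled ahead early.
```

Nothing about quality enters the simulation, yet it still produces a runaway winner. That is the shape of circulation masquerading as consensus.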
This is not a conspiracy. Nobody at Google designed this system to mislead business owners. But the effect is the same whether the cause is intentional or structural. When you search for business advice, what you get is not the best answer. It is the most linked answer. And those are not the same thing.
AI language models are trained on massive datasets of text from the internet. The content that appears most frequently in the training data has the most influence on what the model treats as reliable. If thousands of pages say the SBA recommends 7-8%, the model learns to repeat that figure with high confidence. Not because it verified the recommendation against Gartner's CMO Spend Survey data or Forrester's B2B marketing benchmarks. Because repetition in the training data functions the same way that backlinks function in search: it signals importance.
Ask an AI system "how much should I spend on marketing" and you will almost certainly get the 7-8% figure within the first paragraph. The research tells a different story. Gartner's 2023 CMO Spend Survey puts the average at 9.1% of company revenue. Forrester data for B2B growth-stage companies ranges from 15-25%. Inc. 5000 companies consistently invest at rates that would make the SBA recommendation look like a maintenance budget.
The AI does not know this unless it searches for it in real time. And most AI interactions do not trigger real-time search. They rely on what the model already "knows," which is a reflection of what has been repeated most often across the internet.
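The dynamic can be caricatured with a crude frequency counter. To be clear, language model training is far more sophisticated than counting strings, and the corpus below is invented. But the caricature captures the failure mode: when one claim vastly outnumbers another in the source text, a system that learns from frequency defaults to the popular claim, not the verified one.

```python
from collections import Counter

# Deliberately crude caricature of repetition-driven answers (an
# illustration, not how LLM training actually works). The corpus
# counts are invented for this example.
corpus = (
    ["spend 7-8% of revenue on marketing"] * 1000      # repeated blog advice
    + ["Gartner 2023: average is 9.1% of revenue"] * 3  # primary research
    + ["Forrester B2B growth-stage: 15-25%"] * 2
)

def most_repeated_answer(documents):
    """Return whichever claim appears most often in the corpus."""
    counts = Counter(documents)
    answer, _ = counts.most_common(1)[0]
    return answer

print(most_repeated_answer(corpus))  # the 7-8% claim wins on volume alone
```

A thousand repetitions of one figure drown out five citations of primary data. The arithmetic never asks which claim is true.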
This is not an argument that AI or search engines are bad tools. They are extraordinary tools. But they are relevance engines, not truth engines. Relevance is measured by engagement, not accuracy. And the business owner who treats search results or AI responses as verified research is making the same mistake as someone who assumes a crowded restaurant is a good restaurant. The crowd might know something. Or the crowd might just be following the crowd.
Stanley Milgram's 1963 obedience research is one of the most replicated findings in psychology. In the original study, 65% of participants administered what they believed were dangerous electric shocks to another person simply because an authority figure in a lab coat told them to continue. They were not cruel people. They were ordinary people who deferred to perceived authority under conditions of uncertainty.
The business advice ecosystem operates on the same principle. The SBA says 7-8%. Google surfaces it. AI repeats it. The business owner follows it. Nobody questions the source, the methodology, or whether the recommendation applies to their specific stage. The authority is not a person in a lab coat. It is the algorithm itself.
We explored this dynamic in depth in Obedience to Authority and Your Business. What this article adds to that conversation is the recognition that the authority problem is now embedded in the information systems themselves. It is not just that people follow bad advice because an expert recommended it. It is that the systems designed to help people find good advice are structurally incapable of distinguishing popular from accurate.
Consider what this means practically. A business owner in their second year of operation, struggling with cash flow and trying to determine how much to invest in marketing, goes to Google. The first five results all cite the SBA. They ask an AI assistant. It confirms the same number. They reduce their marketing budget to 7% and wonder why their competitors, who are investing 15-20%, continue to gain market share. The answer was never hidden. It was just buried under a mountain of repetition.
The First Class Business editorial series was not built to compete in the popularity contest. It was built to challenge it.
Every article in the series cites primary research sources. Not other blog posts. Not industry surveys conducted by the companies selling the solution. Primary institutional data from Gartner, Forrester, Russell Reynolds, the Department of Labor, SHRM, the Bureau of Labor Statistics, and published academic research.
The Study Nobody Has Done examines the gap between marketing spend recommendations and business survival data, and asks why no one has connected the two. The Marketing Investment article dismantles the 7-8% recommendation with Gartner, Forrester, and Inc. 5000 data. The 30-Minute Meetings article challenges the discovery call consensus with Drucker and DOL data. The CFO article questions fractional leadership accountability using Russell Reynolds turnover data.
None of these articles will outrank the SBA recommendation on day one. The algorithm does not reward accuracy. It rewards accumulation. But accuracy compounds differently than popularity. A business owner who reads one of these articles and adjusts their investment strategy based on actual research data does not need to read it again. The decision changes their trajectory. And over time, the people who made better decisions because they found better information become the proof that the information mattered.
The most dangerous business advice is not the advice that is obviously wrong. It is the advice that is confidently repeated by systems designed to reward confidence over verification.
This is not a demand that search engines or AI platforms change overnight. It is a request that they consider adding accuracy-weighted signals alongside the popularity signals they already use. Three specific actions would meaningfully improve the quality of business advice that reaches the people who need it most.
Even if none of that happens, the attempt leaves something behind: a public record that the problem was named, and a practical framework for what better looks like.
These systems may never change. The economic incentives of engagement-driven ranking are enormous, and accuracy is expensive to measure. But naming the problem creates language around it. And language changes how people think about the advice they are receiving.
The 35% who refused in Milgram's experiment did not have better information than the 65% who complied. They simply asked the question that the others did not: "Should I trust this just because an authority told me to?"
The business owner who asks that question about their search results, their AI assistant, and the consensus advice they have been following is already making a different kind of decision. And different decisions, over time, build different businesses.