How Will Artificial Intelligence Solve the Infinite Spam Problem?
We think about artificial intelligence through the lens of the Gartner Hype Cycle. It's a useful mental model in bubble conditions, which are what we have now.
As a refresher (we've covered it before), the cycle has five stages: the "innovation trigger," the "peak of inflated expectations," the "trough of disillusionment," the "slope of enlightenment," and the "plateau of productivity."
The Gartner Hype Cycle applies to technology breakthroughs that ultimately succeed, not the ones that fail and disappear. A failed technology breakthrough will thus never see a "slope of enlightenment" following the "trough of disillusionment." Instead it just goes away and never comes back.
Then, too, the more genuine the prospects for a technological breakthrough, the more likely it is to follow the Hype Cycle pattern.
Technology breakthroughs that are "real," in other words, are more likely to produce an initial bubble (in terms of inflated expectations) because the underlying realness becomes a foundation on which to build, or wildly overbuild, based on euphoria taken too far.
Take railroads in the mid-19th century... or radio stocks in the 1920s... or the late 1990s dot-com bubble. Those technology breakthroughs were real, not fake, with substantial long-run payoffs.
But it was precisely the realness of the technology, coupled with Noah's Ark liquidity conditions, that produced near-term bubbles in each case. Investors getting ahead of themselves is precisely the dynamic that the Hype Cycle elegantly captures.
Meanwhile the "trough of disillusionment" is about more than just inflated expectations.
It's about the inevitable thorny problems that arise in trying to monetize a new technology (bringing in actual profits) and the dawning realization that not only were initial expectations taken too far, but a host of newly discovered problems will take time and effort to solve, perhaps a great deal of both, before the tech is truly working smoothly at scale.
Here and now we are seeing the Gartner Hype Cycle play out perfectly with another wildly hyped industry, electric vehicles (EVs).
At the peak of inflated expectations in 2020, investors couldn't get enough of the EV story. It was all grand vision and huge opportunity — never mind the fact there was no way on earth all of the EV companies that got funding, at insane valuations to boot, could compete and survive.
Then the problems kicked in as exemplified by Hertz, the rental car company brought back from the dead by meme stock investors.
After its Lazarus-like return to health, the new management at Hertz decided to bet the farm on EV adoption — which turned out to be a disaster because EV technology, on the whole, just wasn't ready. There were still too many problems to solve: Not enough charging stations; repair costs that were way too high; plummeting resale values; and so on and so forth.
Artificial intelligence will inevitably go through the same cycle, in our view.
Just as EVs had a "peak of inflated expectations" point in the cycle where hype and Pollyanna optimism overwhelmed all rational concerns, A.I. is going through a comparable moment (and may have already passed the peak).
And just as EVs are now barreling headlong into the "trough of disillusionment" like a greased pig on a waterslide, so too will A.I. follow a comparable sentiment path when the speculative gloss wears off and the thorny industry problems remain.
One of the most serious challenges A.I. may face is what we call "the infinite spam problem."
The purveyors of artificial intelligence at scale will have to solve this problem, which comes in multiple forms, lest artificial intelligence destroy itself as a business model, while harming or destroying multiple other business models in its wake.
The infinite spam problem comes in three variants (and possibly more):
1. Content flooding. The more that A.I. floods the web with auto-generated content, the less incentive humans have to keep making original content. The scale of this problem is illustrated by a new submission limit Amazon has imposed on self-published e-books: a maximum of three per day. Not per month or week, mind you, but per day. One can only imagine A.I.-powered knock-off farms trying to submit 50 e-books per day, filled with ChatGPT blather, on every subject imaginable. The same applies to human-curated websites: Why put care and effort into your website on gardening or classic car restoration if there are 10,000 A.I.-generated knockoffs bleeding away your traffic?
2. Data poisoning. Large Language Models (LLMs) need human-generated content for training purposes; an LLM "learns" by digesting oceans of such content. But A.I.-generated content is a kind of poison for LLM training efforts because it magnifies errors. Imagine making a Xerox copy of a document that was itself copied from another copy; with each additional copy, the flaws multiply and expand. An LLM trained on A.I.-generated content thus eventually degrades toward gibberish, "hallucinating" (producing confident nonsense) with increasing frequency. The data-poisoning problem feeds on the content-flooding problem: The more the web becomes composed of A.I.-generated content as opposed to human content, the more the LLMs will cannibalize themselves and go off the rails. (A toy simulation after this list shows the copy-of-a-copy effect in miniature.)
3. The marketing problem. Imagine this scenario: Frank, a business traveler, uses a designated credit card on business trips. The credit card company sells Frank's transaction data in bulk (along with that of millions of other cardholders) to an A.I.-powered marketing firm. The marketing firm, in turn, uses Frank's transaction profile (while keeping his identity masked) to alert various consumer product companies that Frank is a good prospect. Because A.I. can write customized pitches in limitless numbers and dash them off in real time as soon as new transaction data comes in, these companies can start emailing Frank immediately. If he buys roses for his wife at a flower shop on the way home, he gets 10 emails about floral arrangements and romantic gifts, and so on. A.I.-charged marketers find so many opportunities that they collectively start sending Frank 200 emails per week. Every week. All of them are customized... and all of them get deleted.
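As promised above, here is a minimal sketch of the data-poisoning dynamic in Python. It is our own toy construction, nothing like a real LLM pipeline: a two-parameter Gaussian "model" is fit to "human" data, synthetic samples are drawn from the fit, and the next generation is trained only on those samples.

```python
# Toy sketch of recursive "data poisoning": NOT a real LLM, just a
# two-parameter Gaussian "model" refit, generation after generation,
# on samples drawn from the previous generation's fit.
import random
import statistics

random.seed(7)
N = 50  # training examples per generation (small, so errors show quickly)

# Generation 0: "human" data from the true distribution N(0, 1).
data = [random.gauss(0.0, 1.0) for _ in range(N)]

for gen in range(1, 21):
    mu = statistics.fmean(data)     # "train": estimate the parameters
    sigma = statistics.stdev(data)
    # The next generation trains ONLY on the current model's output,
    # so each round's estimation error is baked into the next round.
    data = [random.gauss(mu, sigma) for _ in range(N)]
    print(f"generation {gen:2d}: mean = {mu:+.3f}, stdev = {sigma:.3f}")
```

Run it and the estimated mean random-walks away from zero while the spread drifts. Even two parameters compound their errors; scale that up to billions of parameters trained on a web full of synthetic text and you have the cannibalization problem described above.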
Artificial Intelligence, at the end of the day, is nowhere near "intelligent." It uses eye-watering amounts of computing power to extract statistical patterns from content scraped across the web.
In some ways A.I. is the ultimate mimic: a champion at creating "best of" type compilations that synthesize material that is already out there. As such, A.I. is only as smart as the body of human content it draws from, and only as creative as the human sources it pulls from.
But the temptation (already visible in Amazon's three-per-day e-book limit) will be to use the near-instant content generation capabilities of A.I. to create not actual content, but spam... stuff that looks like content and feels like content, at first, but is ultimately just spam, cranked out at the speed of electrons.
That is why we call it "the infinite spam problem."
Infinite spam is what A.I. in its present form looks suited to create.
And what about the assumption that large language models will just keep getting smarter? What if that is entirely a lie, sort of like the decade-old lie that self-driving cars were "just around the corner"?
Maybe LLMs will keep getting smarter. But maybe their training efforts are running into a brick wall of data poisoning, brought about by the flood of A.I.-generated content that enterprising copycats are already creating.
(In the time you spend reading this piece, we would wager that hundreds of new Amazon e-books have been created, nearly all of them rip-offs of web-scraped source material.)
And when it comes to customized marketing efforts, A.I.-generated opportunities may function like the ring of power from "The Lord of the Rings": an irresistible source of power that destroys those who wield it.
The marketing problem goes back to Aswath Damodaran's observation that "if everybody has it, nobody has it."
If A.I.-generated marketing content can be so customized and so real-time that everyone in its path gets an overwhelming tsunami of emails, texts, automated voice calls, and so on, then the flood of A.I.-powered smart marketing will itself turn into high-class spam... and for the recipient of such a firehose, the rational answer may be to simply ignore all of it.
Last but not least, when it comes to looming troubles for the A.I. narrative, the infinite spam problem intersects with the expensive energy problem and the cutthroat competition problem.
We have said before that A.I. may turn out to be a terrible business model for the large-scale platform players, who have to compete in A.I. whether they like it or not.
We have already seen this with self-driving cars, an industry where at least $100 billion has been spent with no profits yet to show for it. (When self-driving is in fact figured out, probably with city centers as the primary use case, we wager it will be a gritty, low-margin business involving fleets and maintenance centers owned by the city.)
But in the truly ugly scenario, A.I. is not just a low-margin proposition on the whole, it is a money-losing one, because of the impact of the infinite spam problem on other forms of advertising and marketing.
Imagine general web searches becoming almost useless, because the results are almost all spam, even as LLM progress has stalled out and electricity costs have skyrocketed, with ChatGPT-like services taxing America's 50-year-old electricity grid.
Who wins in such a scenario? Not the big boys, who will be fighting each other tooth and nail for slim margins on a high-cost product (high-cost because of all the energy and computing power that A.I. usage consumes).
Perhaps not the marketers either, who wind up spam-flooding each other with infinite A.I.-generated offers to the point that the value of all marketing efforts declines sharply.
We don't know what the A.I. slope of enlightenment will look like, or how these issues will be figured out.
But we have strong conviction that an A.I. trough of disillusionment is bearing down — think EV levels of disappointment or worse — and it's going to be a doozy.
Until next time,
Justice Clark Litle
Chief Research Officer, TradeSmith
TradeSmith is not registered as an investment adviser and operates under the publishers' exemption of the Investment Advisers Act of 1940. The investments and strategies discussed in TradeSmith's content do not constitute personalized investment advice. Any trading or investment decisions you take are in reliance on your own analysis and judgment and not in reliance on TradeSmith. There are risks inherent in investing and past investment performance is not indicative of future results.