Market News
- S&P 500 closes at 6,836 (+0.05%); all major indices post weekly losses over 1.2% after a volatile session
- Valuation alarms: S&P CAPE hits 40.1 (highest since 2000) while corporate bond spreads reach their tightest level since 1998
- AI fears spark Thursday rout: Nasdaq down 2%, Cisco plunging 12%, and Apple dropping 5% on disruption concerns
- Gold surges to $5,046/oz (+$705 YTD) as inflation holds at 2.5% YoY; oil edges up with Brent at $67.75
- Small-caps trade 13% undervalued and tech offers a 16% discount, while consumer defensive stocks remain overpriced
Sam Altman laid out a new ideology.
Altman isn't just building another app. He's building what he genuinely believes is the last major invention humanity will ever need to make.
The conversation closed out the final day of TED2025: Humanity Reimagined, with Altman discussing the growth of AI and how models like ChatGPT could soon become extensions of ourselves. He also addressed, head-on, the questions of safety, power, and moral authority that follow him everywhere now.
Here's what it signals for the landscape of investment, autonomy, and the economy being built around you.
Future Vision
The most repeated idea in Altman's TED talk was that AI is now part of the world and there is no way of stopping it, like the discovery of fundamental physics. It will only get better and better.
That framing is worth pausing on.
When you compare AI development to physics, you're not just saying the technology is powerful. You're saying it operates outside the realm of human decision-making. Gravity doesn't need anyone's permission.
But here's what that comparison quietly does: it removes accountability from the room.
If AI is inevitable, like a rising tide, then the engineers, executives, and investors choosing how fast to build it, and in what direction, are just surfing the wave. The choices they're making every day (what data to train on, what safety thresholds to set, who gets access first) get reframed as natural events rather than deliberate decisions with real consequences.
He encouraged society to "embrace this with caution but not fear"—suggesting that resisting or slowing down could leave people behind.
And there it is. Adapt or get left behind. Which makes it harder to ask the more useful question: should we be building this, this fast, in this way, controlled by these specific people?
Growth Rates
Altman spoke about AI becoming an "extension of yourself"—a system designed to "get to know you over the course of your lifetime." He framed it as deeply personal. Helpful. Almost intimate.
And that framing is precisely the point. We're already living inside a data economy. Your location, your searches, your purchases—these get packaged and sold to advertisers. Most people know this by now, even if they don't love it. But what Altman described at TED is something a level beyond that.
An AI that holds the context of your entire life (your memories, your decisions, your relationships, your fears, your goals) isn't just a better assistant. It's the most powerful lock-in mechanism ever designed.
Think about switching your email provider. Annoying, but doable. Think about switching banks. Inconvenient, but manageable. Now think about switching away from an AI system that has been your daily companion for five years, knows your medical history, remembers every conversation you've had about your retirement, and has been helping you make decisions since your kids were in middle school.
You don't switch that. The cost isn't just technical. It's psychological. It's retention. It's dependency.
Safety as a Market Feature
Altman's position on safety is, to his credit, more thoughtful than most tech CEOs manage. He takes the questions seriously. He engages them directly. He doesn't wave them away.
But the framework he relies on has a structural flaw that's worth naming.
When pressed about safeguards, Altman linked safety directly to product design and user demand, arguing that a good product is a safe product, and that users will naturally demand trustworthy AI.
The logic works in a narrow band. If a product is obviously broken or harmful, users complain, trust collapses, revenue drops, and the company fixes it. That feedback loop functions reasonably well for bugs, errors, and obvious failures.
It doesn't work for the slow, systemic, invisible harms.
OpenAI's "preparedness framework" represents an attempt at self-regulation, but its terms are defined internally, reinforcing the power of the developers to set the boundaries of acceptable risk.
Project Stargate
Here's where it gets interesting from a practical standpoint.
Altman joined forces with SoftBank's Masayoshi Son and Oracle's Larry Ellison to announce Project Stargate.
The goal: spend up to $500 billion building AI infrastructure across the United States by 2029. Data centers. Power systems. The physical backbone of the AI economy.
To give you a sense of scale, the entire U.S. interstate highway system cost about $500 billion in today's dollars. This is that kind of commitment.
❝ "This means we can create AI and AGI in the United States of America."
Sam Altman, OpenAI CEO
Why so much? Because AI runs on power. Every time you ask ChatGPT a question, it's doing billions of calculations in milliseconds. Multiply that by 800 million weekly users. Then multiply that by the AI tools being built for hospitals, banks, factories, and government agencies. The demand for computing power is enormous, and it's growing faster than existing infrastructure can handle.
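To make "enormous" concrete, here is a rough back-of-envelope sketch in Python. Every constant except the 800 million weekly users is an illustrative assumption, not a published OpenAI figure:

```python
# Rough estimate of weekly inference demand. All constants except
# WEEKLY_USERS are illustrative assumptions, not OpenAI figures.
FLOPS_PER_TOKEN = 2 * 200e9      # ~2 x parameter count per token, assuming a 200B-parameter model
TOKENS_PER_QUERY = 750           # assumed average prompt + response length
QUERIES_PER_USER_PER_WEEK = 20   # assumed usage rate
WEEKLY_USERS = 800e6             # figure cited in the article

weekly_flops = (FLOPS_PER_TOKEN * TOKENS_PER_QUERY
                * QUERIES_PER_USER_PER_WEEK * WEEKLY_USERS)
print(f"{weekly_flops:.1e} FLOPs per week")  # on the order of 10^24
```

Change any assumption and the total shifts, but even conservative inputs land in a range where dedicated data centers and power contracts become the binding constraint, which is the case Stargate is built on.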
Altman's bet is simple: whoever builds the infrastructure first controls the future.
He's also said OpenAI plans to spend $1.4 trillion on AI chips and data centers over the next eight years. When someone in the room pushed back on the financial reality of that, Altman basically shrugged and said, "I don't think I'm the strongest at keeping those dueling perspectives in mind."
You have to respect the self-awareness there. The guy knows he thinks big and sometimes skips the spreadsheet part. But around him, he has people who do the spreadsheet part.
That's usually how this works.
OpenAI's AI Hardware
Okay, this one's fun.
In July 2025, Altman spent $6.5 billion to buy a design firm run by Jony Ive. If that name doesn't ring a bell—Jony Ive designed the iMac, the iPod, the iPhone, and the Apple Watch. He's arguably the most influential product designer of the last 30 years.
Together, they're building a new AI device. It's being created in a secret office in San Francisco. Nobody outside the project knows exactly what it is. Altman has described it as a "little friendly companion" that observes your day, understands the context of what you're doing, and helps you in real time.
Some people think it'll be AI-powered earbuds. Others think it's something closer to an ambient home device. It could be both.
Here's the honest truth: it might flop. Altman has said so himself. Silicon Valley has a graveyard of devices that were supposed to change everything—the Segway, the Humane AI Pin, Google Glass. Creating a truly new kind of computing device is incredibly hard.
But here's also the honest truth about Jony Ive: when he said "user interface is not decoration, it defines the human experience," he wasn't being poetic. He was describing what he's spent his entire career proving. He changed how a generation interacts with technology. If anyone can make AI feel natural instead of awkward, it might be him.
The launch window is the second half of 2026. Worth watching.
The Competition Is Real
Dario Amodei, Co-Founder and CEO of Anthropic
Anthropic, started by former OpenAI employees who left over disagreements about safety and leadership, is now valued at nearly $380 billion. Its Claude AI models are widely considered genuinely competitive with ChatGPT, and in some areas better. The company has built a reputation for being more careful and more transparent, which matters to certain customers.
More recently, Claude crossed an invisible threshold.
According to multiple reports, U.S. defense and intelligence teams used Anthropic's AI through Palantir's military software stack during an operation that resulted in the capture of Venezuelan leader Nicolás Maduro.
Claude wasn't pulling triggers or issuing orders — but it was helping analysts process intelligence faster, identify patterns across massive data streams, and reduce the time between signal and decision.
That distinction matters. It marks one of the first known cases where a frontier commercial AI system was embedded directly into real-world national security operations.
In practical terms, it means Claude is no longer just competing for consumer users or enterprise contracts. It's competing for something far more durable: government dependency. And historically, companies that become embedded in defense infrastructure don't just grow. They become permanent.
Google's DeepMind is backed by essentially unlimited resources. Its Gemini models now power Apple's Siri, a contract that OpenAI reportedly thought was theirs. Losing Apple as a partner stung. One OpenAI engineer told a reporter, "Yeah, that was not great. A lot of us thought that was a done deal."
And then there's Elon Musk, who co-founded OpenAI in 2015, left its board in 2018, and has been at war with Altman ever since. His AI company xAI runs Grok, which has had its share of controversy but is growing fast and forecasts strong revenue by 2029.
This isn't a race where one person finishes and everyone else goes home. It's more like a marathon where a few people are in the lead pack and the finish line keeps moving.
Bottom Line
Sam Altman has an optimistic vision: that future generations will look back at our current lives with "pity and nostalgia," because AI will have unlocked a world of "incredible material abundance" that we can barely imagine today.
Maybe he's right. The efficiency gains are real. The medical breakthroughs coming from AI-assisted research are real. The productivity improvements are real.
But the centralization of influence is also real. The concentration of decision-making power in entities unaccountable to democratic processes is real. And the push to treat all of this as a foregone conclusion (resistance is futile, adapt or be left behind) is itself a choice. Someone made it. And it's worth knowing that.
For investors: the AI sector isn't just building tools. It's building the infrastructure for how we live, remember, and make decisions. The companies that successfully embed themselves into that infrastructure—at the memory layer, the trust layer, the daily habit layer—will generate returns that compound with human dependency rather than just product quality.
That's the opportunity. And the risk. Often at the same time.
A vigilant, clear-eyed approach serves you better than either pure optimism or reflexive fear.
And that starts with understanding exactly what was said on that stage in Vancouver—and what it actually meant.
📺 Watch the full TED2025 interview with Sam Altman:
OpenAI's Sam Altman Talks ChatGPT, AI Agents and Superintelligence — Live at TED2025
Disclaimer: This analysis is for educational purposes only and should not be considered investment advice. Always do your own research before making investment decisions. |
Items marked with an asterisk (*) are promotional and help support this newsletter at no cost to readers. |