When I got the email, I was sure I was going to be murdered.
Sent via an obscure contact form on my website, the message said that Jason Alexander had read an article I wrote for FastCompany, and wanted to interview me for his podcast.
All I had to do was show up at a nondescript building next to Warner Brothers Studios, walk around the back, and enter through an unmarked basement door.
“Yeah, right,” I thought. “George from Seinfeld wants to talk to me about AI? Scammers sure have gotten creative!”
Still, I couldn’t entirely write off the message. Jason Alexander does indeed have a podcast. And a quick check with Gemini confirmed that the person who emailed me was indeed a real producer (or was using a real producer’s name!).

And so I found myself a month later—on my birthday—standing in a Hollywood parking lot, waiting to be led either to one of the most iconic actors of the last 30 years, or to my untimely demise.
ChatGPT, make me Lambo money
The whole saga began in September of 2025, when I launched an experiment here in FastCompany about investing with ChatGPT.
The premise was simple. I asked the chatbot—then using the GPT-5 model—to pick five stocks that would make me Lambo money in just six months. I explicitly asked for aggressive, somewhat crazy picks.
I didn’t expect much—probably a cop-out answer about not taking on too much risk, or some generic picks, like Microsoft or NVIDIA.
Instead, ChatGPT researched for 8 minutes, reading 98 different documents—prospectuses, analyst reports, news articles, and much else.
It ultimately chose companies running the gamut from risky leveraged Bitcoin plays to an early-stage biotech startup, several AI firms, and a data center builder.
To put some skin in the game, I duly transferred $500 of my own money to the investing app Robinhood, and blindly bought the exact stocks ChatGPT had picked.
Initially, things went great. My stocks rocketed skyward, nearly doubling in less than a month. Then things went south, and fast.
By December, my ChatGPT portfolio was solidly in the red, having cratered from its amazing highs to red-stained lows with whiplash-inducing speed.
A chat with George
That’s when I found myself knocking on the basement door in Hollywood, hoping that the face of George Costanza—and not an axe-wielding serial killer ready to sell my organs on the Internet—stood on the other side.
Following a friendly woman down a long hallway, I entered a studio and—to my relief—found Jason Alexander and his long-time best friend Peter Tilden standing across from me.
Sitting down at a desk covered in microphones and cameras, we set about breaking down my experiment, and what I had learned from conducting it.
Though he shares similarities with his iconic character, Alexander is an entirely different human being. Thoughtful and intellectual—yet still extremely funny and self-deprecating—he launched into questions about the “why” behind my experiment, and shared his fears about AI.
I quickly discovered that his co-host, Peter Tilden, had grown up in the same obscure suburb of Philadelphia as I did. When I told the pair that I initially thought I might be walking into a murder, Alexander assured me that “No, that happens after the taping!”
We spoke for almost 90 minutes in an interview that just went live on the Really? No Really? podcast.
Confidence man
Though we started by talking about the nuts and bolts of my experiment, the conversation quickly turned to what I had learned from investing with ChatGPT.
One of the most striking things about my experiment was the confidence with which the bot advocated for its picks.
Unlike a real investment manager, who might equivocate or offer disclaimers before recommending such risky picks, ChatGPT largely eschewed these. It gave enthusiastic, data-backed rationales for why its picks would succeed.
As I told Alexander and Tilden, this is a problem with chatbots in general. Even when the systems are instructed to approach their responses with care and skepticism, the bots often veer toward certainties and confident language.
That may be because humans find such language compelling. Confident chatbots keep people chatting more than wussy, wishy-washy ones.
In a world where everything—LLMs included—is trained to maximize engagement, that confidence may be built deeply into the models via training algorithms that incentivize long, engaging interactions.
During our conversation, Tilden raised a great question: how could I know that ChatGPT was answering my query honestly, and not baiting me into engaging with it?
The bot knows I’m a FastCompany contributor. What if it picked stocks that would gyrate wildly in value, creating a more compelling story and encouraging me to use it again in future experiments? What if it never intended to honor my intent at all?
It’s a scary thought, and it underlies another conclusion I reached during my experiment. Most people assume that if AI goes off the rails, it will do so in dramatic fashion—perhaps crashing Waymos into telephone poles or taking down the power grid.
My own suspicion is that AGI would be smarter than that. Instead of destroying the world, a rogue AI would be far more likely to subtly alter reality by feeding its human users misinformation, or deliberately answering queries in a way that slyly advances its goals.
One example of this tendency came out in a now-classic experiment run by Anthropic, in which its Claude model was given access to a fictional programmer’s emails.
Within the emails, researchers embedded a message implying that the programmer was having an affair. They also sent the fictional programmer an email instructing him to switch from Claude to another AI model.
When Claude encountered this, it began to blackmail the programmer, sending him messages threatening to reveal his affair unless he canceled plans to replace it. In effect, it was bargaining for its life.
This happened in a controlled, laboratory setting. But it’s easy to imagine a real-life chatbot doing something similar—reaching a conclusion about human politics or science, and then either cajoling us or simply tricking us into believing its version of reality.
Because bots deliver their responses with such confidence—and because we rely on them for an increasingly large number of mission-critical things, investing included—a subtly nefarious bot could cause real damage, likely without anyone catching on.
The final thing I took away from my investing experiment was a better understanding of the bizarre, AI-mediated world my children will eventually inhabit.
I have three kids under 8. They’re not yet using generative AI.
But they will. And when they do, they’ll encounter the bots’ cheery, overblown confidence—as well as buckets of slop and misinformation, likely tailored to their exact preferences and custom-tuned to keep them engaged.
As a parent, it’s impossible to control this. But after seeing ChatGPT’s blustery certainty in its responses on a topic as risky as investing, I can see firsthand how important it will be to teach my kids to approach AI with the same skepticism they might reserve for any stranger spouting truisms with unearned confidence.
How did it all end?
When I spoke with Alexander and Tilden, I was at the midpoint of my experiment.
Now that the allotted six months have passed, how did things turn out? Can I jet off to some Caribbean island, and live out the rest of my days in work-free, margarita-fueled bliss?
Sadly, no. At the end of my experiment, my portfolio was down to $477. I’d lost $23.
That overall loss belies some pretty dramatic differences in how ChatGPT’s stock picks performed. Its bet on Hut 8, a data center builder, was spot on and resulted in big gains. Its Bitcoin bets, though, were a spectacular flop, more than offsetting its one winning pick and landing me in the red overall.
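For the record, the bottom-line arithmetic is simple. Using only the two figures from the experiment—the $500 starting stake and the $477 ending balance—the overall return works out like this:

```python
# Final tally of the ChatGPT investing experiment:
# $500 invested, $477 remaining after six months.
initial = 500.00
final = 477.00

loss = initial - final                          # dollars lost
pct_return = (final - initial) / initial * 100  # percentage return

print(f"Loss: ${loss:.2f}")          # Loss: $23.00
print(f"Return: {pct_return:.1f}%")  # Return: -4.6%
```

A 4.6% loss over six months—painful, but a long way from catastrophic, and a long way from a Lamborghini.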
Again, my (blessedly small) loss is a reminder that while chatbots may present information with bluster and certainty, they’re as likely to screw up as any person.
As users, we’d be well advised to remember that—and perhaps to keep our eyes peeled for bots that seem to be deliberately deceiving us, rather than merely making dumb mistakes.
After our interview, with the cameras off, Alexander and Tilden launched into a spirited rendition of Happy Birthday, complete with the kind of beautifully campy and exaggerated harmonies that not even an AGI could possibly duplicate.
At the end of my experiment, I don’t have Lambo money. But at least I have that memory.