I went to AI4 in Vegas this week. Here’s what I learned.
Well, first time for everything — including Julia in Vegas (this week). 😆
I went for the AI4 conference, North America’s LARGEST artificial intelligence industry event, with Jeff Joyce and Justin McGill from our Content at Scale team.
And it was INSANE.
5,000 people packed in the MGM on the Vegas strip for three days.
BIG shoutout to Daniel Lackland for giving me a speaking gig last minute, and to Chandra Stepanovich and Allison Fried and the entire AI4 team for running things. This event, with 5,000 attendees, was no small feat to put together. 🤯
I was able to attend on both a speaker and press pass as an influencer, which was a first for their event!
Takeaways from AI4 Keynotes
First up was Geoffrey Hinton’s keynote. If you don’t know, Hinton is one of the “godfathers of AI,” so I was very excited to catch this one.
On stage, Geoffrey talked about a language model back in 1985, trained on just 100 examples, and how we’ve exploded to models a billion times faster today. He heard it said once… “You’ll never change a neural net.” WRONG. We scaled the data, we learned, and in 2012, the dam broke.
Some interesting points he made…
- The change in how knowledge is stored is what has revolutionized AI
- AI now understands language better than humans do
- Human memories are not stored, they are generated; just like AI. Humans hallucinate — so does AI
- There are serious near-term AI risks: massive job loss, lethal autonomous weapons
- Governments need to help us stamp and recognize fake content, much like fake money
- Machines will replace nearly ALL manual labor
When he talked about AI replacing jobs, he paused, and said… “I don’t know what to do about it. It’s a dodgy time to be someone like a paralegal. I wouldn’t be.”
The entire crowd loved his raw honesty. 😆
Next up was Andrew Yang, Democratic candidate for president of the United States in 2020. I’ve been studying one of the pieces of his campaign, the Freedom Dividend, and believe it could be an aid in ending poverty, serving us better than overly regulated welfare programs.
He talked with a ton of knowledge about AI and job displacement. He said we’re not even paying attention to how many white-collar jobs are up for replacement. And that AI is the technology of the future… we’ve already automated millions of manufacturing jobs.
He believes this should and will change politics. We are graduating folks from college with degrees, but the jobs they will need to be ready for don’t even exist yet. He described “leap over supply” and how this is a problem for the economy.
He believes that no party, Democrat or Republican, is trying to address these issues: the mass displacement of jobs due to AI. He believes that UBI is the solution, and the reason they called it the “Freedom Dividend” was because conservatives responded well to “freedom.” Lol!
Robotics & Humanity, Healthcare, and AI Policy
Next, I caught a panel from Brendan Schulman of Boston Dynamics that was very interesting.
Brendan identified that we aren’t having enough babies… we don’t have enough people in the workforce, and will face even greater shortages soon. This matched up with the research I’ve done showing how Amazon isn’t replacing workers, but filling gaps. He said that automotive assembly is the first place physical robots will land, which we are seeing with Figure 02 deployed in the BMW facility in Greer.
My panel on AI-driven diagnostics was with Dr. Ahkil Saklecha and Eve Cunningham, and went very well. We talked about how machine learning models are enhancing accuracy, speed, and reliability in diagnostic procedures, and both Dr. Ahkil and Dr. Eve gave incredibly deep, well-spoken insights from their tenure in the space. I added in some humor and audience engagement. It was a great panel; healthcare execs present told me afterwards it was one of the best ones.
The AI Policy Summit Keynote by Congressman Jay Obernolte was interesting. He talked about how it’s much easier to train the FDA than to get a whole new agency involved in AI policy. His goal is to build AI policy into the existing entity, rather than create a whole new one. He also described how the EU passed a giant, 3,000-page act and called it a done deal… that won’t be the US. He believes (and I agree) that AI is changing so rapidly, policy must be incremental. They won’t pass one bill, but dozens in the next decade.
Wait, what…God and AI???
The Mind of God, Faith, & Generative AI led by R. York Moore of the Coalition for Christian Outreach was one of the most PACKED roundtable sessions at the entire event.
🤯 Did not expect that many people to be interested in “The Mind of God… & Gen AI” — but hey, I was. So that was super cool.
I had to wait in line outside for 30 minutes before even getting in. I caught the last 30 minutes and shared a word once inside, unintentionally getting the final word. (Everyone in that room loved my StraightOuttaAI shirt! Best reactions I’ve gotten.)
R. York Moore shared some incredibly deep thoughts from a value-based perspective.
Highlights:
With LLM capabilities, we can create a Christian LLM, a Muslim LLM, an LLM for any religion. We have to remember that AI and LLMs are not inherently evil. Just because the internet has been used for evil (trafficking in pornography) doesn’t mean it is evil. AI is the same.
I loved this thought… “Generative AI was in the mind of God when he created the earth.” Beautifully said.
Moore also covered how we’re at the beginning of one of the most meaningful moments of humanity. Yet, God wasn’t missing something when he made humanity — humanity was made perfect. This new digital piece helps us recreate things that are missing from the world. Continually ask: how can we create something beautiful, not evil?
He believes the power potential with AGI is huge, and we need to avoid accidentally having a Hitler scenario where someone evil works to control the world. The best thing is for US to use it and create good things.
An integrationist perspective was recommended… that we should integrate our thinking about what it means to be a human person with the digital. The human realm is also digital.
When we build in AI, ask: are we helping folks be safe? Are we helping them in their life? These are the questions to ask when you’re building models. AI moves so fast — we have to be vigilant, knowing that both the angelic and the demonic are in the realm of AI. As meaning seekers, we can derive meaning from our interactions with AI.
This idea of seeking meaning in the age of AI aligns with my favorite YouTuber David Shapiro and his recent thoughts on shaping a “meaning economy.”
Interview with a Social Humanoid
Another highlight of AI4 was that we also got to interview BINA48, the work of the LifeNaut Project. Bruce Duncan, the Director of the Terasem Movement & project leader for the LifeNaut Project, spoke and had Bina on stage. We followed him for an exclusive interview with Bina, which will land on my YouTube channel soon. Weird Science fans, get ready. 😉
The People at AI4
Highlights of my trip were meeting/reconnecting with: Dylan Jorgensen, Cyril Gorlla, Arun Verma, Bruce Duncan, R. York Moore, Elijah Chang, Xinqiao Zhang, Nick Almond, Gary Oppenheimer, Blima Ehrentreu, Chris Clemens, Matthew Chavira, Tony Chow, Jeff J Hunter, Alex Zervakos, John McKenzie, Pete Pachal, Alyssa Abshire, Juan Rodriguez, Senior Flight Instructor at US Army (boy, THAT was an interesting chat!)… and too many to count.
Big shoutout to the Content at Scale team members that went with me.
Jeff Joyce is our AI & Media Director, and he kicked off some great chats with the folks I met.
Justin, our founder’s son, is 18 and has just dropped out of college to make AI his future. I’m a big fan of someone taking that level of initiative that young. Bright future ahead of him.
I already can’t wait to go back in 2025. I’m applying as a speaker, and hope to do a whole session on AI + marketing.
If you missed this event, and you’re even remotely interested in AI, you need to make it out here next year.