The Human Factor: Navigating the Age of AI

I'm typically optimistic about the future. Lately, however, I've been concerned about our direction. For the last few decades, humanity has been building out a digital landscape, and because the rate of change was incremental, the conversations about how the technology could affect us stayed manageable. See Tim Urban's The AI Revolution: The Road to Superintelligence for more on how that rate of change is accelerating.

At first, the progress was exciting. Could we shrink the behemoth mainframes enough that everyone could use one? The answer was yes. From there, the birth of the Internet led to massive social media platforms, enabled by PCs shrinking into a device that fits in your pocket. While there is much to criticize, many of these layered achievements are marvels of modern engineering. And yet, as we continue down this course, it feels like we are on the eve of whatever will define the next age of humanity.

As many of the world's largest corporations race to be the first to make a significant breakthrough in AI, I can't help but wonder if this is a dead end. Even if we build great things in the world of bits and pixels, it won't mean much if the world around us falls into disarray. It doesn't matter what's happening on your platform of choice if everyone around you is dejected and feels they have no future.

And yet, corporations and governments seem more excited by the promise of what could be than by the problems we already have in the world at large. That's not to say AI couldn't help solve our most pressing issues; it's a question of incentives and optics.

How many cautionary tales have sci-fi writers written about the dangers of unchecked corporate interests, or of government overreach pushing people to buy into the world those in power want? Viewed from the top down, a current is pulling humanity along, and our path toward ever more computing brings us closer to an end state that is one part dream and one part nightmare.

On the one hand, doing more with less is everyone's dream. Until the Industrial Revolution, we had to do everything by hand, and even afterward, we still had to operate the machines ourselves.

Now, much as the US outsourced physical manufacturing to other places, we seem to be on a trajectory to outsource everything we possibly can. Doesn't anyone worry about outsourcing too much? At its core, doing hard things builds character in each of us.

None of us are fully formed out of the box, and I don't think removing every bit of friction makes people, or by extension the systems they interact with, any better. As humans, we thrive with just enough struggle, and as we get better at handling it, we can tackle ever larger problems. At the very least, we can pass our ideas and lessons on to the next generation so they can leapfrog us instead of relearning everything from scratch.

So I think the fundamental problem with developing AI is this: if you expect a system to tell you what's best or how to solve things, there's no reason for you or me to want to get better. Worse, how can you even ask the questions that lead to correct answers if you never learned to think or to question in the first place? And if we grow dependent on these systems for complicated problems, not now but, say, a generation from now, how would our descendants have enough real-world knowledge to solve issues without their omniscient helpers? We should be building technology that accelerates our adaptability, not technology that robs us of it.

Throughout history, we evolved to live in an ever-changing environment and to struggle for survival. Now, however, we are creating circumstances where some fraction of us can put our heads in the sand and stagnate. That's not to say this isn't already possible, given the prevalence of attention-seeking apps and devices (read The Scarcity Brain for more on that). AI might simply supercharge the pull of the worst of these apps.

We potentially have two revolutions happening in parallel: increased physical automation and AI that can now do administrative work. Some might think this paves the way for Universal Basic Income (UBI), but I think that discounts basic human needs.

Yes, we all need money to survive. But just as much, we like feeling needed, even in simple ways. We need to ask ourselves what each of us values. How do we lead a meaningful life when meaning itself seems devalued by the chase for profit and efficiency?

I think rejecting AI entirely might be too harsh, but we must pump the brakes and decide what we want the tool to do. We as a species need to have a conversation and be specific about what limitations we want to put in place. It's one thing to progress; it's another to let core aspects of human flourishing fail because we never stopped to consider the ramifications.

The biggest question is this: as we develop AI systems, where do we, as humans, fit in? Do we collectively hand the keys to those systems, and by extension to those who own and operate them, or do we take a stand and say humanity should stay in the driver's seat even when AI is faster, more efficient, and more cost-effective?

Last week, DeepSeek shook the tech world with its open-source models, showing that you can do a lot with far fewer resources. No doubt every AI lab is now scrambling to replicate or build on what DeepSeek did. But this is a tale as old as time: the underfunded upstart without the best tech punches far above its weight class and shocks the world. The big breakthrough is the reasoning approach showcased by the DeepSeek-R1 model. (The claims deserve a grain of salt, since announcements coming out of China are notoriously sketchy on specifics, but in this case experts can pick the work apart because the papers have been published.)

Until last week, the US was comfortable with the trajectory of AI and tech as long as it held the keys to progress. Now it's an open question whether the US will keep those keys or whether someone else will enter the playing field and take the spoils. Welcome to the future! Or rather, as William Gibson is often quoted: the future is already here; it's just not evenly distributed.

Now, let's return to Earth: what can we mortal humans, with all our limitations and flaws, do in the near term? Step one is to embrace the messiness of human interaction. Go analog; cultivate in-person spaces with the friends and hobbies you enjoy. As AI systems invade, the only places they won't reach are the ones you don't let them into. The other part is to use fewer shortcuts when creating things. For example, maybe keeping the stutters and tiny imperfections in our podcasts makes them more human; in the world to come, there may be no misspeaks, stutters, or gaps at all. For this article, I'm leaving my thoughts as they are instead of editing for perfection.

At home, with the people we care about, we should debate AI's role in our lives. I, for one, don't want the world to turn into one big AI blob. There is a richness to the human experience that will be compressed if we turn ourselves over to these systems, and we will only atrophy in the process.

If we collectively start having conversations about what we want the future to look like, the tech companies will have to listen, because at the very least they follow the money, and we can take a stand for the future we want rather than the one they think we want. Going back to my sci-fi examples: so many stories feature people living in VR worlds, and yet during the pandemic, when people got a small taste of that always-online life, I think they rejected it. We don't want to be glued to our devices. After a while, something is lacking in a 2D representation of a person, and you'd rather spend time together face to face.

So we get to vote with our actions and our conversations. Even though it may seem like people far more powerful than any of us are directing the show when it comes to technology, it still comes down to how regular people choose to use it and whether the trade-offs are worth it. AI might seem better than humans at most skills, but it still requires humans to function, so if we stagnate, it stagnates. We get to decide the future of our story rather than pass it off to someone, or something, else.


Keep Exploring!

The podcast below helped shape some of the thoughts in this post, but new information and technological breakthroughs are being published rapidly. This year, we will continue to see better models and new applications emerge that were once confined to sci-fi. Buckle up. It's going to be a wild ride!
