Gabriel Uribe

The future of AI, & how I live life 🦿

•

Historically when friends/family would ask about my thoughts on AI, I'd meander my way through a response illustrating its recent evolution, present utility in my life, and the wild road ahead.

My plan is for this wordsoup to be a written version that I iterate on, highlighting how AI's trajectory gets priced into my life philosophy in several areas.

For now, this is an early, public draft that I vomited over a weekend.

Situational Awareness: The Decade Ahead

Let's first set the stage with an essay that aligns with my expectations in AI, which inform my decision-making.

Situational Awareness: The Decade Ahead is a 165-page PDF that captures Leopold Aschenbrenner's (formerly OpenAI) thoughts on artificial intelligence.

It covers how AI has developed over the last few years, where we are now, and where things are going in the near future.

It's long. While it was released in June 2024, I only finally read it in August 2024 over a 6 hr flight.

My summary of the relevant-to-this-blog-post parts of the essay for you:

  • we went from toy (GPT-2) -> high schooler (GPT-4) in just four years
  • there are three levers to better intelligence (the toy calculation after this list shows how the first two compound):
    • effective compute - more & better computers to train models
    • algorithmic efficiencies - better algorithms to get same results with less compute required to train the model
    • "unhobbling" - unlocking raw potential of existing models with simple techniques
  • following the trend lines, we get another step-function increase by ~2027, or AI researcher-level intelligence
  • with a horde of 24/7 AI researchers, we reach superintelligence by 2028, 2030 conservatively
  • most people's mental models of AI today revolve around its (un)intelligence in non-software domains, but it only has to get better at software to reach AGI (artificial general intelligence), the point where AI can improve itself; from there we reach superintelligence and 'solve' the other domains too
  • AI is constrained by energy and compute, hence the race to secure energy contracts & build the trillion-dollar cluster
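
As a toy illustration of how the first two levers compound: the growth rates below are my own illustrative assumptions for the sketch, not figures quoted from the essay.

```ts
// Back-of-the-envelope: compute scaling and algorithmic efficiency both contribute
// orders of magnitude (OOMs) of "effective compute", and OOMs add.
// The ~0.5 OOM/year rates are illustrative assumptions, not the essay's exact numbers.
const oomsPerYear = {
  physicalCompute: 0.5,       // more & better chips
  algorithmicEfficiency: 0.5, // same results with less training compute
};

function effectiveComputeMultiplier(years: number): number {
  const ooms =
    years * (oomsPerYear.physicalCompute + oomsPerYear.algorithmicEfficiency);
  return Math.pow(10, ooms);
}

// ~3 years at ~1 OOM/year ≈ 1,000x the effective training compute,
// before counting any "unhobbling" gains layered on top.
console.log(effectiveComputeMultiplier(3)); // 1000
```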

I'm not qualified to comment on the timelines provided for AGI or ASI (artificial superintelligence), but directionally, feels right 🤷‍♂️

Previous estimates for the singularity, the point at which technological growth becomes uncontrollable and irreversible, were around 2045. But to me, that also seems too conservative now.

Where we are

ChatGPT launched in November 2022 with GPT-3.5, not even two years ago at the time of writing this post.

With the improvements since launch, along with competing models that are better at some tasks, we now have a vibrant ecosystem that has provided a step-function improvement in the way at least some of us work.

I feel like a wizard casting a time-bending spell each time I zero-shot a prompt and the model, with its tools, solves a task that would have taken me 5-20 minutes, e.g. setting up a web scraper or generating a new functional page in one of my apps.

Models are now also multi-modal. They can process and generate images, audio, and text. I can upload a screenshot of a design or describe a feature on v0.dev and get working code back.
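
Under the hood, that screenshot-to-code loop is roughly a single multimodal API call. Here's a minimal sketch, assuming an OpenAI-style chat completions endpoint and a gpt-4o-class model; the model name, prompt, and file path are placeholders, and this is not how v0.dev itself is implemented.

```ts
// Send a screenshot plus a text prompt to a multimodal model and get code back.
// Endpoint, model, and prompt are assumptions for illustration; check the current API docs.
import { readFileSync } from "node:fs";

const screenshot = readFileSync("design.png").toString("base64");

const res = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "gpt-4o",
    messages: [
      {
        role: "user",
        content: [
          { type: "text", text: "Implement this design as a React component with Tailwind." },
          { type: "image_url", image_url: { url: `data:image/png;base64,${screenshot}` } },
        ],
      },
    ],
  }),
});

const data = await res.json();
console.log(data.choices[0].message.content); // the generated component code
```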

Functional prototypes are the wireframes of today. You no longer need to know how to code to build prototypes!

It's magical. The closest comparison I have is when I started using Google in the early 2000s and suddenly felt like I could learn anything.

If the internet democratized access to knowledge, then AI is currently democratizing access to intelligence. We now have experts in multiple domains providing feedback to these models, allowing the rest of us to benefit each time we interface with them.

Even just in the last few months since Leopold's essay came out, we've seen the release of o1, the highest-performing model on language & reasoning tasks by a wide margin, and Advanced Voice Mode from OpenAI, which lets you chat with GPT-4o in a human-like manner, with humor, interruptions, and intonation.

These are major improvements. I can have a conversation with an AI and be made to feel a certain way. No wonder companies like Character.AI are already worth billions.

I cancelled my Duolingo subscription a while ago. I either watch Chinese TV shows or YouTube, meet with my Chinese teacher, or chat with ChatGPT for practice. That's my stack.

The wildest part to me is that the advancements in software engineering that I've experienced in 2024 alone are mostly using GPT-4 level models.

We've just managed to 'unhobble' them and dramatically reduce the latency of their capabilities since the initial release in 2023. And you just know GPT-5 level models are cooking with new clusters coming online. In the meantime, we may just keep getting more efficient GPT-4 tier models that run on inexpensive hardware or at lower latencies, providing ever-increasing access to their utility.

How this all applies to my life

I'm young enough that in almost all cases, the singularity happens within my lifetime. Well before I'm of retirement age, even.

I don't even believe in retirement. I subscribe to the Japanese philosophy of finding your ikigai, your purpose in life, and doing it until death (which, with the singularity, becomes a choice). A key piece of the life puzzle.

So my entire philosophy is to figure out what I love doing that provides value and makes money, and then do it forever. The notion of retirement does not compute, even less so with the pipeline to ASI and post-scarcity.

I instead aggressively invest my resources into building up myself, expanding my own humanity and supporting the people around me now.

That's the logical side of myself. But I balance eastern and western philosophies in my worldview. I believe feeling and presence are just as important.

So I'm a flow state maximalist. Flow states from singing, meditating, coding, writing, conversing, lifting, running, walking, climbing & beyond.

I've removed as much unnecessary cruft from my life as I can to live in flow.

That's what provides me lasting fulfillment. And the path to fulfillment is what matters, ASI or not.

I never believed in deferred life plans. Even less so now. So don't defer your life. Take the trip, start the company, learn the language, etc.

I expand my earning power not to work more to earn more, but to live more. You can't buy back your youth.

And to be clear, I don't suggest living on borrowed time. Find a sustainable path that enables you to live the life you want to live, but don't overoptimize for money that you're just going to put away indefinitely or spend on worldly things that don't delight you.

Overall, relationships/experiences > money. With a technological singularity on the horizon, I value money much less than I did previously.

Closing thoughts

Make your own assessment. No one actually knows how things will play out. History is being made as we speak.

I could die tomorrow, we could enter World War 3, AI could kill us in a trillion ways, e.g. by inundating the world with paperclips, the works.

I don't fixate on these negative futures, because I don't operate from a place of fear, but they're more fuel for my life choices. Most of us know people who have died early, before they could realize their dreams or enjoy what they sacrificed for.

Regardless of the timelines, I am more at peace with my life decisions than I've ever been.

There has never been a better time in the history of humanity to take risks. Despite what you may read on the news, we've never had so much abundance.

Granted, don't overconsume either. Because that leads to physical and digital obesity.

Anyway, build meaning, not matter or money. Because the former won't be lost or deemed irrelevant, regardless of the future ahead of us.

Have any thoughts? Message me at any of my socials below.
