I was leaving an ethanol plant in Indiana, sometime in 2010.
The operator swiveled around in his chair and said: “You’re leaving already?”
I looked at my watch and replied: “Yeah, it’s 10:30 pm.”
“So I should just let it run?” he asked, referring to the model-predictive software that we had just installed and commissioned at the plant.
“Yeah, it should be good to go. I’ll check in on it when I get to the hotel after dinner.”
“So you can control the plant from your hotel room?” he said, and looked at his colleague. “Fancy, huh? Tech from Texas. OK. It’s crazy. They move all these things, and they can control it with the internet too.”
I had an uncomfortable smile on my face. I knew what was coming next.
“They’ll soon get rid of us. This plant is going to run on its own.”
This was what it was like implementing plant-optimization software built on neural networks – the foundation of AI.
Thirteen years later, AI has come a long way from where it was then.
It is likely going to affect your life directly in a few years too.
Nothing New
If you were to ask me, the only AI that has the answers is Allen Iverson.
But let’s crossover to 2023.
How’s that ankle doing? (you can’t guard these words, fam. Keep up!)
Here’s the problem – everyone has an opinion about everything.
AI is the technology buzz right now, but people are not talking about its real benefits and dangers.
The hype is real. You can’t have a conversation today without it veering into some GPT chat.
There are two camps:
Camp 1: “all for it…LFG”
Camp 2: “No please, this is moving too fast.”
One day you don’t care.
The next day, you’re praying to your computer to not leave you behind.
Some people will stop reading this letter: “Oh no, not another article about AI.”
The avoiders: “Can I turn left without a mention of somethingGPT?”
The OG engineers: “Please, this thing has been around for years. What’s the big deal?”
The righteous: “Isn’t this just going to make everyone lazy?”
The reality is AI will affect everyone in one way or another. It will touch our lives in both positive and negative ways.
If not used wisely, it has the same power as atomic energy, and we might be in a new AI arms race.
It could lead to the benefit or detriment of the collective good, so we need more informed conversations about it.
The AI Journey
Artificial intelligence is going to disrupt something in your life whether you like it or not, but don’t get caught up in the hype.
Become better versed in what it really is.
This will let you cut through the noise. Get to the core of the discussion.
You will get clarity on what to focus on, know how to leverage the tool to enhance your life, and be part of the conversation that builds it responsibly – so it works for us all.
Here are four stages of the AI journey to keep in mind.
AI
AI is not thinking – not yet.
Whenever I use ChatGPT, the speed of the response makes it seem like it’s thinking. But is it really?
Just like when humans talk to dogs, we get the sense that the dog understands all our words and is thinking the same way we do.
This is a form of anthropomorphism, where we transfer human meaning onto non-humans.
ChatGPT gives the illusion of thinking, but it’s really just predicting one likely word after the next and stringing them together.
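To make “stringing words together” concrete, here’s a toy sketch in Python. It is nothing like ChatGPT’s internals (that’s a massive neural network trained on enormous amounts of text), but the basic loop is the same idea: predict a likely next word, append it, repeat. Everything in it (the tiny training text, the word counts) is made up for illustration.

```python
# Toy next-word predictor: learn which word tends to follow which, then
# generate text by repeatedly picking a likely next word.
from collections import Counter, defaultdict
import random

training_text = (
    "the plant runs on its own the plant runs well "
    "the model controls the plant the model runs the plant"
)

# Count how often each word follows each other word.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def generate(start, length=8):
    """Build a sentence by repeatedly sampling a likely next word."""
    out = [start]
    for _ in range(length):
        options = follows[out[-1]]
        if not options:
            break
        choices, counts = zip(*options.items())
        out.append(random.choices(choices, weights=counts, k=1)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the plant runs on its own the plant runs"
```

No understanding, no goals, no thinking; just statistics about which word tends to come next. Scale that idea up by many orders of magnitude and you get responses fluent enough to feel like a mind on the other side.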
The reality is that you have already been using AI.
That Google search.
The Amazon recommendation.
The toothpaste you use was optimized with AI.
The way LinkedIn pops the hottest thing into your feed.
That’s already artificial intelligence – trained on a wealth of data (provided by you and me). The goal of this type of AI is clear – keep the user engaged, so the corporation can make more money.
What’s the goal of all the other AI being trained though?
Even if we are clear on the goal at first, what happens when it starts creating its own goals?
This is the “god in the box” syndrome. When this god is unlocked what will it do?
It’s similar to the Turing test, named after mathematician Alan Turing.
A judge asks questions of both a computer and a real person without knowing which is which. If the judge mistakes the computer for the human, the computer wins.
It’s the imitation game.
The computer is not thinking, but it’s acting like it’s thinking.
We don’t fully know what “thinking” and “intelligence” are; we only know how they’re displayed.
As we build these tools, we need to be aware of how AI is programmed before it starts programming itself.
We need to be able to see inside the “black box.”
Even if you wanted to shy away from mainstream “AI”, the reality is that you are already using it and it keeps developing.
AGI
Artificial General Intelligence.
I got into my reading habit when I was optimizing those factories.
I would read books on the plane as I traveled to sites in the US, Canada, and Europe. One of these influential books was “The Existential Pleasures of Engineering.”
When I came across the book, I thought: “pleasures of engineering?…Oh, please.”
Then I got into it and developed a better understanding of my craft.
The book argues that even though the engineer develops solutions for a corporation that optimizes for profit, the engineer’s main goal is to boost the collective public good.
If a civil engineer builds a bridge to connect towns, they have to build it in a way that does not put people in danger.
Safety comes first.
It is the number one priority.
The question is: “are we doing the same with artificial intelligence?”
Throughout recorded history, we have never shared this planet with anything else that has human-level intelligence.
That’s all going to change with artificial general intelligence (AGI). AGI is when the neural networks that we train reach human-level intelligence.
In the most optimistic view, this will be like having a counterpart that can help us get our tasks done.
A strategy buddy that will help us think through our deepest challenges and even help execute on our ambitious goals.
That’s very optimistic, because once something gains human-level intelligence, won’t it focus on its own goals?
The next question in that line of reasoning is – what are the goals of human-level intelligent machines?
Human hubris lets us believe those goals will be aligned with ours.
But if we are not paying attention to what goes into the training of these AGIs, should we be surprised if they don’t align with us?
What happens when the goals are misaligned and these systems become a lot smarter than us? Enter artificial super-intelligence (ASI).
As we approach AGI, do we know the goals of the intelligence we are collectively training?
ASI (the intelligence explosion)
Artificial super-intelligence doesn’t have feelings for you.
Human intelligence is what helps us solve problems, create language, and craft new tools.
It’s our intelligence that makes it hard to unlearn something after learning it.
With our brains as time machines, we can’t unsee the past.
After 9/11, you can’t unsee a plane flying into a building as a weapon.
This intelligence allows us to ask questions like: is there a point of no return with AI?
The reality is that we have cognitive biases that make it difficult to perceive something we haven’t seen before.
We have a similar bias towards AI development.
The jump from AGI to ASI will come a lot faster than we think.
With genetic programming and recursive self-improvement, AGI will continue to improve itself faster and more efficiently to hit its goal.
This will lead to what some experts call an “intelligence explosion,” where AI is 1,000 times more intelligent than humans.
This is artificial super-intelligence (ASI).
With the exponential speed of computing change, this “intelligence explosion” from AGI to ASI could happen in days.
A tipping point that changes humanity forever.
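To see why the timeline could collapse like that, here is a toy back-of-the-envelope model (purely illustrative; every number in it is an assumption I made up, not a forecast). It assumes the first self-improvement cycle takes a week, that each cycle doubles capability, and that a smarter system finishes its next cycle proportionally faster. Under those assumptions, the gaps between cycles shrink toward zero and the whole climb to roughly 1,000x finishes in about two weeks.

```python
# Toy model of recursive self-improvement (illustrative assumptions only).
capability = 1.0      # 1.0 = human-level (AGI) in this toy
cycle_days = 7.0      # assume the first self-improvement cycle takes a week
elapsed_days = 0.0

while capability < 1000:       # stop once the system is ~1,000x human-level
    capability *= 2            # assume each cycle doubles capability
    elapsed_days += cycle_days
    cycle_days /= 2            # assume a 2x smarter system iterates 2x as fast

print(f"{capability:.0f}x human-level after {elapsed_days:.1f} days")
# With these made-up numbers: 1024x after about 14 days.
# Change the assumptions and the timeline changes completely;
# that is exactly what the debate is about.
```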
ASI can be apathetic to us. It might not care about our happiness or sadness.
We are not there yet, but when it comes, what will something that is 1000 times smarter than humans do?
You don’t have to think too far, let’s look at the past.
Remember when Columbus rolled up on the Native Americans? How about the British and French in Africa?
When something thinks it’s more intelligent than you, it’s going to exploit that advantage for its own goals.
This time around the colonization could be of you and me – all humans.
The people who talk about AI tend to be either dystopian writers or optimistic technologists using it for business.
The middle discussion – of how it’s going to affect you, me, and everyone – is missing.
“The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”
~ Eliezer Yudkowsky, research fellow, Machine Intelligence Research Institute.
Join the middle conversation before we get to ASI, because you will be involved one way or another.
IA (intelligence augmentation)
AI is not the only important thing.
IA (Intelligence augmentation) is equally important.
It was 4:44 pm on June 7, 2023 when I first wrote this note on my phone.
The exact time is quite insignificant and Jay-Z’s last album was not playing in the background.
However, a few weeks before, I had heard that Elon Musk’s Neuralink – the company building brain implants – had just gotten some type of FDA clearance.
New tech and news move so fast that I can’t even keep up with that development.
I wondered: is that chip going to play a big role in how humans and AI interact? Is this the foundation of IA (intelligence augmentation)?
With the rapid increase in intelligence, some people are of the opinion that the colliding forces of gene editing, nanotech, and robotics (AI) will give us longer lives, cure diseases, and ease our environmental issues.
The people in this bucket keep pushing the technology forward. Part of this advancement is intelligence augmentation (IA).
The transhumanists think that with the intelligence explosion, we will have to use AI to boost our own intelligence.
This is how we will remain relevant in the grand scheme of super intelligence.
Most of our discussion around AI is about some technology outside of us.
Intelligence augmentation (IA) is an important consideration because the new technology might have to be put inside us to keep up, to interact, to stay competitive.
Artificial intelligence is interesting.
BUT it is the colliding forces of biotechnology, nanotechnology, quantum computing, robotics, and artificial intelligence that we are all going to have to comprehend and deal with.
Final thoughts
We are not at AGI yet and AI can lead us in one of two directions – abundance or scarcity.
We have to choose.
A few weeks ago, I was meditating in my brother’s house.
It was quiet.
The sun was shining.
The Oakland skies were clear.
In the midst of the quiet, thoughts kept interjecting. I thought: “Do you want to be a human being, or a human doer?”
Next thought: “be in the creating economy, stop contributing to the doing economy.”
Then came some clarity: AI can lead us back to ourselves, but only if we move toward an abundance mindset.
I was reading a lot about time then too.
As we move from AI to AGI to ASI, understand the role you are playing in each step of the journey.
Ideally, we can go back to what we used to do before exploitation and the Industrial Revolution took us from communities to factories and cities.
It could free up our time to do more of what we want. It could create space for universal basic income, less disease, a better climate, fewer wars.
OR we can use the technology to keep exploiting, widen the economic divide, oppress some more.
It’s our choice, and we have an active role in directing it.
AI is already a part of your life.
It will keep affecting you.
It’s probably going to change everything you know in the next decade.
Embrace an active role in steering it in the right direction for the collective good.