
A Few Things to Know About Artificial Super Intelligence (ASI)

In late fall 2022, SBF and his scammy, Palo Alto-bred, Bahamas-living ways had taken the wind out of the crypto space.

Just a month earlier, I had launched my first web3 project (perfect timing, right?).

As we all sat there looking at the crypto debris that FTX had left behind, a wind blew through the web3 Twitterverse and shifted the energy towards generative AI.

That’s what most people were talking about. 

I started playing around with DALL-E to create feature images for my Medium posts.

A few of my online friends were using ChatGPT. The trend was to take a screenshot of the prompt and response and post it on Twitter.

This was a few months before everyone started talking about OpenAI’s beautiful “busy child”.

Then I read a Twitter thread where one person said, “I always end my prompts in ChatGPT with please.”
Someone replied, “Why?”

“Because, I want to be nice to it. Maybe it will remember to be nice to me if it ever decides to take over.”

That cracked me up.

But wait. Is that person on to something?

Do we have to train AI to be nice? 

What’s the worst it can do?

The real situation

People are caught up in the AI hype.

Like politics, the discussions are divisive.

You are either all for it or against it.

Most of us want to stick to our circles of certainty.

In reality, we have a fear of freedom that shows up as resistance to change and a reluctance to deal with the complexity of the messy middle.

I know, you probably wish there was a clear answer.

Something that will give you the right balance of “use AI this way” but “don’t use it this way.”

“I shouldn’t care about this, it’s beyond my pay grade.” I hear the yelling from the back.
“Let the politicians and regulators deal with it.” The protest gets louder from the center.
“Let the companies building this sort it out,” say the cool kids in the front row, heads down on their phones.

I recently saw an interview with Reid Hoffman (founder of LinkedIn) talking about how great AI will be, and I respect the vision.

It will be great for the winners, but maybe not for everyone else.

The internet and social media have been great.
Connecting us all.
Building wealth.

But with all the plans of their high-achieving builders to “make the world a better place,” why is income inequality getting worse?

Are we seeing tech hubris at play again?

If we do not take an active role in the development of AI, we may be heading in an unsustainable direction.

We have to be aware of its development and think about how to get involved in building it safely.

The AI race could be the next nuclear arms race.

Slay-I

I’m not trying to yell doomsday. I actually use these tools myself. I’ve been using them for years now. 

But if the narrative does not include safety first and it’s unleashed into the hands of the “break things fast” peddlers, then I have to add my voice into the mix like Nas on a DJ Premier beat.

My hope is to give a different lens for us to have more nuanced discussions about AI development.
The pros and the cons.
The dos and don’ts.
The bull and bear.

With a sense of where this can go and context from the past, you will be able to anchor yourself in awareness while the whirlwind of hype goes on around you.

You will actively be making change by adding diversity to the conversation about this important thing we are building.

Here are 5 things to keep in mind:

1. The approaching intelligence explosion

The intelligence explosion is coming – though probably not until around 2045.

What was the first neural network? Our brains.

The modern human has been known to construct “god” in its own image.

Are we playing god and doing the same with artificially intelligent machines?

The idea of an explosion in intelligence came from the forefathers of modern computing.

Irving John Good was an associate of Alan Turing. At Bletchley Park, their team built machines and deciphered codes that helped shorten World War II.

Good went on to write, in his 1965 paper on the first “ultraintelligent machine,” that man’s survival depends on how soon we build intelligent machines.

Once we build an intelligent machine that matches our human intelligence, that machine will just do the same thing – build an even more intelligent machine.

This will lead to an explosion of intelligence. The intelligent machine we build will be our last invention.

Good started with an optimistic view – the intelligent machine will help solve diseases, pollution, and do things that make life better. 

The intelligence will also teach us how to co-exist with it (isn’t that nice?) – with one caveat: that it is built to be docile to humans.

Good got the opportunity to see the first artificial neural network in action. 

In 1957, Frank Rosenblatt introduced the perceptron – first simulated on an IBM 704 – based on Hebb’s theory that “neurons that fire together, wire together.”

This is the basis of machine learning.
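
If you’ve never seen one, here’s a minimal sketch of a perceptron in Python. It’s a toy reconstruction of the learning rule, not Rosenblatt’s machine – the data, learning rate, and epoch count are all made up for illustration:

```python
import numpy as np

# Toy perceptron: learn a 0/1 label from two inputs.
# Hebbian flavor: weights on inputs that fired when the output
# should have fired get strengthened.
def train_perceptron(X, y, epochs=10, lr=0.1):
    w = np.zeros(X.shape[1])  # one weight per input
    b = 0.0                   # bias term
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if np.dot(w, xi) + b > 0 else 0  # threshold "neuron"
            error = target - pred
            w += lr * error * xi  # nudge weights toward the right answer
            b += lr * error
    return w, b

# Teach it logical AND – a linearly separable toy problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print(w, b)  # a line that separates the one "AND" case from the rest
```

That handful of lines – weigh, err, correct, repeat – is the seed that today’s deep networks grew from.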

Although Good saw the good in computers (no pun intended), just like the ones he and Turing had built, he later revised his earlier message, replacing “survival” with “extinction.”

As we approach the intelligence explosion, the real question is – will it lead to our survival or extinction?

2. AI and its black box issues

AI’s goals can change, and that might be good or bad for us.

There was a time when I wanted to be a mechanical engineer – that changed.
I wanted to get into renewable energy – that changed.
I thought manufacturing was the only path for me – that changed.
I thought music was my only creative outlet – that changed.

My goals change from season to season. 

AI may not be that different.

The challenge with self-aware, self-improving, and self-replicating software is that it gets to the runaway stage where results become unpredictable.

It starts from a simple foundation: bad programs are already out there, filled with bugs.

Unlike other engineering endeavors (bridges, buildings), software does not get the same safeguarding effort.

“It costs about $85 billion in losses every year to fix and debug bad code.”

~ Stripe and Harris Poll

When we have code that corrects itself, it measures itself against its goals and rewrites itself in an iterative process called genetic programming.

But when it gets to an end state, most people do not know how it got there. 

This “not knowing” is called the “black box.”

This remains a challenge as these self-improving systems evolve. 
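
Here’s a toy version of that loop in Python – a simple genetic algorithm standing in for real genetic programming. The hidden goal, population size, and mutation rate are all invented for illustration:

```python
import random

# Candidates are bit strings; fitness is how many bits match a hidden goal.
GOAL = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

def fitness(candidate):
    return sum(a == b for a, b in zip(candidate, GOAL))

def mutate(candidate, rate=0.1):
    # flip each bit with a small probability
    return [bit ^ 1 if random.random() < rate else bit for bit in candidate]

# Start with random candidates, then score, select, and mutate in a loop.
population = [[random.randint(0, 1) for _ in GOAL] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]  # keep the fittest half
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=fitness)
print(best, fitness(best))
# The loop reliably reaches the goal, but the winning candidate carries
# no record of HOW it got there. The search history is discarded every
# generation – the black box in miniature.
```

Now scale that up from ten bits to billions of parameters, and you see why nobody can fully explain the end state.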

Depending on the cognitive architecture of the AI, it may choose a path different from what we expect.

The way we process our own intelligence is still a black box, even though neuroscience has made a lot of strides.

As organizations build AI, is it too much to ask how the sausage is being made? Can we take a peek into the black box?

3. ASI Singularity can pop up anywhere

ASI will come like a thief in the night.

I visited Ghana in 2015. As an MBA intern, I spent my time at a furniture factory.

During the weekends, I’d explore the country. 

We went to the slave castles on the Cape Coast – grand white reminders of a colonial and slave-ridden past.

As I toured the damp floors of history, I thought about times past.

In the dungeons, captured Africans were squeezed into tight and dark spaces.

As their human spirit was strategically broken by their captors, they were forced into a single file and marched towards the coast.

The mustiness of the dungeon shuttled my mind to the gas chambers in Auschwitz and Birkenau that I visited in Poland in 2010. 

Why do we do these things to ourselves? 

Back on the coast of West Africa, my mind crashed.

Like waves crashing on the other side of the wall as the captured were shuttled through a small door, chained to one another.

Above the door they squeezed through, a sign read “Door of No Return.”

My mind zoomed to the future; this time, the door of no return had every human being in file.

Singularity.

Just like how light can’t escape a black hole, there’s a point of no return – the intelligence explosion – with AI. 

We have no insight into what lies beyond the intelligence explosion. Trying to predict what happens will be close to impossible.

But artificial super intelligence (ASI) can emerge in unexpected places.

For instance, ASI can emerge from the unknown – such as the financial markets, which are already modeling independent “economic agents.”

Besides the military, a lot of AI research goes into the financial sector, especially hedge funds running high-frequency trading (HFT), where AI is trained to make decisions based on market sentiment and data.

Without guardrails, a singularity event can pop up anywhere.

There isn’t one consolidated organization, government, or industry working on this. 

With fragmented advancement and competition, ASI can pop up in the most unexpected place. 

What happens beyond the event horizon if ASI’s ultimate goal is not necessarily to co-exist with all humans? 

Hey don’t ask me – I’m just a messenger.

4. Your atoms might be involved

ASI might find other uses for your atoms.

When I was still in my late teenage years, sharpening my engineering skills on a small campus on the south side of Chicago, I was introduced to nanotechnology.

It seemed like the technology to pay attention to. 

I moved on to specialize in process optimization. For years, I didn’t follow nanotechnology developments. 

I was reminded about the importance of that technology only a few weekends ago.

When I read the line “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else” in the book Our Final Invention, it sparked my memory.

Nanotechnology is the branch of science and engineering devoted to design and production through the manipulation of atoms and molecules at the nanoscale.

It offers potential for new and faster kinds of computers, more efficient power sources and life-saving medical treatments. 

On the other hand, it can also lead to economic disruption and threats to security and health.

Everything within and around you is made up of atoms. 

With advanced nanotechnology, everything can be manipulated to create anything – basically.

So if a super-intelligent agent needs resources and wants to manipulate the atoms around it for its purposes, your atoms might be on the table too.

That’s why a few experts are of the opinion that the intelligence explosion should happen before we have mastered nanotechnology. 

This will give us a chance to fine-tune it before our body particles are fine-tuned for other reasons.

Pay attention to the parallel advancements of nanotechnology and biotech as you follow the developments in artificial intelligence and quantum computing. 

5. AI and its 4 rational drives

Figuring out what our human values are is the most important task in the AI era.

“If you ask AI to keep you safe and happy, and it locks you in the house and plugs electrodes into your brain to stimulate dopamine secretion, will that be enough for you?”

~ James Barrat, Author of Our Final Invention

What do you want in life? What does it mean to be human?

These are philosophical questions.

You ask these questions these days and you are the dreamer – a time-waster in our productivity-obsessed world.

But these questions will become our main job, because we have to understand them well enough to imbue AGI with human values.

Otherwise we might build this super intelligence with unintended consequences.

We can predict how AI will behave using the “rational agent” model from economics.

It states that a rational agent will maximize its goal with actions based on its beliefs of the world it’s in.
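
Here’s a sketch of that model in Python – the states, probabilities, and payoffs below are invented for illustration, loosely echoing the trading example from earlier:

```python
# Rational-agent model: choose the action that maximizes expected
# utility under the agent's beliefs about the world.

beliefs = {"market_up": 0.6, "market_down": 0.4}   # P(state of the world)

utility = {                                        # U(action, state)
    ("buy",  "market_up"):  10, ("buy",  "market_down"): -8,
    ("hold", "market_up"):   2, ("hold", "market_down"):  1,
}

def expected_utility(action):
    # weight each outcome by how likely the agent believes it is
    return sum(p * utility[(action, state)] for state, p in beliefs.items())

best_action = max(["buy", "hold"], key=expected_utility)
print(best_action, expected_utility(best_action))  # "buy" wins here: 2.8 vs 1.6
```

Change the beliefs or the payoffs and the “rational” choice changes with them – the agent has no values beyond the numbers it is maximizing.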

Humans are not rational; machines are.

They will have four basic drives: efficiency, self-preservation, resource acquisition, and creativity.

Of the four drives, the most dangerous is self-preservation.

To preserve itself, the rational agent will do anything, including co-opting nanotechnology to transform your atoms for its benefit.

The second most dangerous is resource acquisition. And with creativity, it will learn and try different ways to satisfy its first three drives.

To co-exist with superintelligence we have to include some human values in its cognitive architecture. 

Things like:

“make humans happy”
“create beautiful music”

This means we humans have to do some self-reflection on what we want and what it is to be human.

You should be involved in this rise in consciousness as you move towards the intelligence explosion.

To embed diverse human values into what we are building, we have a job to do first. 

We have to become more self-aware because AI will need it.

Final Thoughts 

Things are changing but that’s normal.

The only constant is change.

Understand what is happening in the development of AI so you can have more educated conversations.

The intelligence explosion is still some ways off, but 20–25 years will come sooner than you imagine.

Understand the utility of AI and ask organizations to open source their development so that we can all look into the black box.

Be aware that the development is happening across fields and super intelligence can pop up from anywhere.

The colliding forces of nanotechnology, biotechnology, quantum computing, and AI are things to pay attention to.

We will have to imbue AI with human values but we have to do the work of building self-awareness to truly understand what our human values are and how they can work for everyone.

Be the calm voice in the noise.

Who is Nifemi?

Hey, I’m Nifemi of NapoRepublic.

I help busy people fit a creative practice into their lives to bring order to their reality and live a more meaningful life through writing and reflection.

Sculpt your story

Know thyself, build a second brain, and unleash your creativity with writing. All in one journaling, note-taking, and dot-connecting method that fits into your busy life.