

We Actively Have To Build AI To Be Friendly

I remember walking into a factory in Michigan (sometime in 2012).

There were a few people crowded around the computer display in the control room.

“Hello, good morning,” I said.

They all turned around. 

The plant manager was composed but his face said it all – there was something wrong.

He asked: “So what did you do?”

“Huh?” I responded.

“Did you log into the plant at 3 AM?” 

“No, I didn’t.”

“Well, someone did. And they decided to open up the valves of one of the dryers.” My eyes opened wider.

The shift supervisor echoed the sentiment: “Yeah, I watched as the mouse moved over to the dryer and opened it up.”

They all looked at me with suspicion.

“No, it wasn’t me. It wasn’t us,” I repeated.

After some investigation, they realized that their firewall had been compromised and someone outside the organization had taken over the operating system.

They reinforced their firewall. This made it a lot more difficult to log into their plant (making our jobs harder) but made the plant a lot safer.

This is an example of what happens when an unexpected actor has control of your coupled and connected systems.

It could have been a far more catastrophic co-opting of a facility, depending on what was being targeted and what the actor’s motives were.

Now imagine that actor is an artificial superintelligence (ASI), a thousand times more intelligent than any human.

What will it do?
How will it act?
Can we make sure the results are favorable to us?

AI as a Friend

Most people are getting numbed out by the whiplash of information and advancing technologies, making it hard to have constructive conversations about AI and the future world we are building.

Most people don’t know that we actively have to make AI friendly to co-exist with it.

It could be our last invention. 

If it is, what world will we live in when we are no longer the most intelligent beings on the planet?

The more our technology is coupled together, the more vulnerable it becomes.

No! Your fears are not hopping out of your nightmares, heading to the office with you.

That morning commute with your hybrid work schedule is just filled with thoughts of “I’m on the right track, but am I missing something?”

Anxiety leaves you wrecked one day.
Then the next day – a wave of certainty sweeps through you.

“There’s no way a machine can become self-aware” (it can’t?)
“If things get bad, we can just roll the tech back.” (can we?)
“I’m sure there is a kill switch that we can use just as things are about to go out of control.” (is there?)

AI can lead to either abundance or scarcity – it’s our choice.

We can build AI responsibly to work for all of us.

We have to take an active role in shaping how it’s built and the governance of the technology as we move towards AGI – human-level intelligence.

Otherwise we might build something with unintended consequences.

There will be bad AI. 

We have to make good AI too.

Building AI with Intention

No one can predict the future. If you can, let’s talk soon.

A good way to think about the future is to look at the past.

As we move deeper into the automation age, familiarize yourself with the tools and ideas that show we can actively build AI to work for us.

You will be an active participant, shaping the consciousness that builds a world of technological abundance filled with equity and diversity.

Here are 3 things to keep in mind to build AI with thoughtful intention.

1. Friendly AI

We have to make AI friendly to us.

I remember reading a book that had a profound impact on me. It’s called The Existential Pleasures of Engineering.

It made me question and think about the value of the engineer. 

The engineer is there for the public good.
That is the ultimate goal.

As AGI is built, we have to ask: what is its ultimate goal?

How does it involve us?

We have to build friendliness into AI.

I’m not saying that we are building AI with bad intentions, but we might be building it without awareness of what we don’t know.

Availability bias means we judge risk by the reference points we already have.

We don’t have this point of reference with AI. This is a frontier technology and we might not be building it with the proper guardrails.

Unfortunately, when disaster happens, it might be too late.

Friendly AI is what we should be building.

It should have a value of upholding human life and dignity.

We don’t want it to be ambivalent to us. We want it to actively work for and with us.

The AI’s architecture has values and preferences baked into its “utility function.” The value of preserving and enhancing human life has to be included in this utility function.

This is to make sure it doesn’t hurt us through unintended consequences.
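
To make that concrete, here is a minimal sketch in Python of what baking human safety into a utility function could look like. Everything in it (the Action shape, the harm estimate, the weighting) is a hypothetical illustration; real value alignment is far harder than a weighted sum.

```python
# A toy utility function with human safety inside the objective, not bolted on.
# Action, task_reward, harm_risk, and HARM_WEIGHT are hypothetical stand-ins.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    task_reward: float  # how much the action advances the assigned goal
    harm_risk: float    # estimated probability (0.0-1.0) that it harms a human

HARM_WEIGHT = 1_000_000  # safety dominates: no task gain should outweigh harm

def utility(action: Action) -> float:
    """Score an action; preserving human life is part of the utility function."""
    return action.task_reward - HARM_WEIGHT * action.harm_risk

def choose(candidates: list[Action]) -> Action:
    """Pick the highest-utility action."""
    return max(candidates, key=utility)

# Example: a faster plan that slightly endangers people loses to a safe one.
safe = Action("safe_plan", task_reward=10.0, harm_risk=0.0)
fast = Action("fast_plan", task_reward=50.0, harm_risk=0.001)
assert choose([safe, fast]) is safe
```

The point of the sketch: harm to humans is scored inside the objective itself, so the agent can never trade safety away for task performance.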

It also has to have values that evolve with us. 

Coherent extrapolated volition (CEV) is a term coined by Eliezer Yudkowsky, research fellow at the Machine Intelligence Research Institute, as he developed the foundations of friendly AI. 

The argument is that it would not be sufficient to explicitly program what we think our desires and motivations are into an AI.

Instead, we should find a way to program it in a way that it would act in our best interests as we evolve.

Friendly AI should be able to infer a better and more democratic version of ourselves tomorrow and adapt to those values.

This might sound utopian, but these are the complexities of what we are building.

The current focus of AI is productivity, but we also have to build “friendly AI” whose intention toward humans is friendliness.

2. Security AI

AI is about offense and defense. 

AI security is about these dynamics.

When I watch the NBA, I love the dynamics – the ebb and flow of the game.

The offense and defense.

That’s how it’s going to be with AI development.

While one AI might be built to attack, we will have to build others to defend.

With genetic algorithms and neural networks, we cannot see clearly what is happening inside – the black box of AI.

This black box issue is compounded by the speed of computer processing, which is not limited the way biological processing – your body – is.

Just as our biological cells are programmed to die through apoptosis, we might need something similar in AI.

We might have to build it out with appliances that are programmed to die – apoptotic components. 

This will shut down the AI when it’s about to run away.
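
Here is a rough sketch of what an apoptotic component could look like in software: a watchdog thread that kills the process unless a human-controlled channel keeps renewing its lease. The lease interval and renewal function are hypothetical, and a real dead-man’s switch would have to be tamper-proof and live outside the AI’s own process.

```python
# A sketch of an "apoptotic component": the process dies unless a
# human-controlled channel keeps renewing its lease.

import os
import threading
import time

LEASE_SECONDS = 60.0  # how long the system may run without a human renewal
_last_renewal = time.monotonic()
_lock = threading.Lock()

def renew_lease() -> None:
    """Called only via a human-controlled channel; silence means death."""
    global _last_renewal
    with _lock:
        _last_renewal = time.monotonic()

def _watchdog() -> None:
    while True:
        time.sleep(1.0)
        with _lock:
            expired = time.monotonic() - _last_renewal > LEASE_SECONDS
        if expired:
            os._exit(1)  # programmed death: immediate, no cleanup, no appeal

threading.Thread(target=_watchdog, daemon=True).start()
```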

That’s one approach.

Another defense approach is the “safety AI scaffolding approach”.

In this case, we build a safe artificial intelligence, with a mathematical proof of safety as its foundation, and it then goes on to build subsequent AI.

These “provable devices” will control any ASI that crops up, acting as a line of defense for the unseen consequences of bad-acting AI.
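
As a minimal sketch of that gate, assume every component ships with a formal proof object and a checker that can verify it. The Component shape and verify_proof stub below are hypothetical stand-ins, in the spirit of proof-carrying code, not a real verifier.

```python
# A sketch of the scaffolding gate: nothing deploys without a
# machine-checkable safety proof.

from dataclasses import dataclass

@dataclass
class Component:
    name: str
    code: bytes
    safety_proof: bytes  # formal proof object shipped alongside the code

def verify_proof(code: bytes, proof: bytes) -> bool:
    """Placeholder for a real proof checker. By default nothing is trusted."""
    return False  # a real checker would mechanically verify the proof object

def deploy(component: Component) -> None:
    """Refuse to hand anything to the runtime without a verified safety proof."""
    if not verify_proof(component.code, component.safety_proof):
        raise PermissionError(f"{component.name}: no valid safety proof")
    # ... only now hand the component off to the runtime ...
```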

Finally, we can build AI in the virtual world. 

A sandbox that keeps it separated from the real world.

Just as divers carry redundant safety systems for one of the most dangerous sports in the world.
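
Here is a toy version of the sandbox idea, assuming a Unix system: run the untrusted program in a child process with an empty environment, hard resource limits, and a wall-clock timeout. Real containment would need far stronger isolation (no network, no actuators, ideally a dedicated VM or an air gap); this only sketches the principle.

```python
# A toy sandbox: an untrusted program runs in a child process with no
# inherited secrets and hard limits on CPU, memory, and wall-clock time.
# Unix-only (resource module).

import resource
import subprocess

def _limit_resources() -> None:
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))  # at most 5s of CPU time
    mem = 256 * 2**20                                # at most 256 MB of memory
    resource.setrlimit(resource.RLIMIT_AS, (mem, mem))

def run_sandboxed(cmd: list[str]) -> subprocess.CompletedProcess:
    """Run cmd in a contained child process."""
    return subprocess.run(
        cmd,
        env={},                       # no credentials or config leak in
        preexec_fn=_limit_resources,  # limits applied inside the child
        capture_output=True,
        timeout=10,                   # wall-clock kill switch
    )

# Example: a contained run of a hypothetical agent script.
# result = run_sandboxed(["python3", "untrusted_agent.py"])
```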

We will need different defense systems for bad-acting AI.

“Proof of safety will be required for every subsequent AI. From a secure foundation, powerful AI could then be used to solve real-world problems.”

~ James Barrat, Author of Our Final Invention

As AI progresses it will be about attack and defense. Good and bad AI. 

We can build AI with apoptotic components and safety protocols, and inside virtual environments, to give it some safety redundancy. 

3. Decoupled AI

AI would not survive two weeks with no light in Nigeria.

A lot of my recent thinking around AI got amplified as I read the book Our Final Invention.

It was doomsday with AI.

I hurried to finish the book. I didn’t want to be reminded of my impending mortality, whether by AI or not.

Then I came across a line from the author that brought me back to reality. It made me laugh, actually.

He was talking about AI attacking the grid system and crippling cities.

Then I came across this line:

“if energy stays out for more than two weeks, most infants under age one will die of starvation because of their need for formula”

I cracked up immediately. Not about starving babies, of course.

I thought: “Oh James (Mr. Author), you must not have been to a country like Nigeria before.”

The reality is that a doomsday where artificial superintelligence (ASI) takes over everything depends on a tightly coupled system connecting electricity, information, and hardware.

But what about areas with not-so-constant electricity?

I guess there’s a natural mystic blowing through the air.

No electricity might just be our self-defense to ASI.

I knew we were on to something in Nigeria. The world has just not caught up to our spiritual innovations. Just like how those colonizers didn’t know that mosquitoes were our natural freedom fighters.

Malware and cybercrime are a proxy to what AI gone bad could look like.

Bad AGI would probably start with state-sanctioned private entities racing to use the technology to commit crime and fraud.

Just like malware, AGI could commandeer networked computers, creating a “botnet” or “robot network”.

It would channel the computing power of the network to achieve its goal. The main targets form a trifecta: our energy grid, transport network, and finance sector.

The most impactful of the three is the power grid because, although it is not centralized, it is tightly coupled across countries.

This is a case for building more decentralized systems.

An example of runaway malware is Stuxnet.

Stuxnet was malware built to destroy machines through their programmable logic controllers (PLCs). It was used to infiltrate Iran’s nuclear program and sabotage its centrifuges.

It went unnoticed for months.
It reportedly set back Iran’s plans by two years.
It was later reported to have been made by the US and Israel.

Unfortunately, after it served its purpose, it was poorly contained and made its way into the wild.

Stuxnet is still out there roaming the streets – in the hands of bad actors.

Such a powerful program, now replicated and refined.

A fumble from the military shows how something built with short-term vision can have unforeseen long-term consequences.

The doomsday scenario with ASI comes into play with technology that is highly coupled together. 

We need to build more robust infrastructure systems and incorporate more decentralized points of control, so that our main infrastructure (power, transport, and finance) is not over-concentrated and vulnerable to attack.
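
A toy model of what decentralized points of control buy us: each region runs its own controller and can island itself, so capturing one node cannot cascade into a whole-grid failure. The regions and loads below are hypothetical.

```python
# Decentralized control, sketched: there is no global master to capture,
# and each region can isolate a compromise locally.

from dataclasses import dataclass

@dataclass
class Region:
    name: str
    load_mw: float
    compromised: bool = False

    def island(self) -> None:
        """Local decision on local data: disconnect rather than cascade."""
        self.load_mw = 0.0

def contain(regions: list[Region]) -> None:
    """Each region defends itself; healthy ones keep running."""
    for region in regions:
        if region.compromised:
            region.island()

grid = [Region("north", 120.0), Region("south", 90.0, compromised=True)]
contain(grid)
assert sum(r.load_mw for r in grid) == 120.0  # attack cost one region, not all
```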

Final Thoughts

Let AI work for you and everyone.

We can start by making sure that we are building “Friendly AI”. 

The most important question is “what will be the disposition of AI to humans?”

Organizations building AI should be more transparent. There should be a push for open source, allowing everyone the ability to look into the “black box.” 

There could also be a “proof of safety” that allows us to immediately verify the safety of what we build.

Just like malware, there will be bad AI and anti-bad AI will have to be built to counter it.

Lastly, we should be thoughtful about how we couple our infrastructure, to make sure we build a robust system that is not too centralized.

Automation can work for us all, but only if we build it collectively.

Ask for friendly AI.

Who is Nifemi?

Hey I’m Nifemi of NapoRepublic

I help busy people fit in a creative practice to bring order to their reality and live a more meaningful life through writing and reflection.

Sculpt your story

Know thyself, build a second brain, and unleash your creativity with writing. All in one journaling, note-taking, and dot-connecting method that fits into your busy life.