It is Now Impossible to Stop Artificial Intelligence | Mohammad Gawdat

Interview

Mo GAWDAT: We are so brainwashed by productivity that it is impossible to stop AI. You will look back in 2027 and say, shoot, why didn't we listen? I don't mean to scare people a lot, by the way. I just want people to wake up. I don't know how much louder I can scream...


I get such aggressive bashing comments on social media: "You've just killed my hope, because humans suck." What was Trump doing? He was trying to play a political game against China. It's an arms race.

With AI there is sadly a point of no return. Yes, humanity was lost. People are gonna hate me for what I'm about to say. One thing that I have to warn the world about is...

 

(Former Google X Chief Business Officer Mo Gawdat, interviewed by YouTuber James Laughlin)


James: I have never had to give any of you a warning about what you're about to listen to, but for what you're about to hear today I just want to give you a heads up: it may shock you, it may trigger you. There are things I chat about with today's guest that really challenge where we're headed as humanity and where the planet is heading. I sat down with the former Chief Business Officer of Google X, Mo Gawdat. Mo is also the author of Scary Smart. We talk about AI, and you need to listen to what he has to say. First of all, AI is coming; we can't change that, it's way too late to stop it. Secondly, it's going to outsmart humans; it's almost already there. And thirdly, bad things will happen. These are the three inevitables that Mo shared. I want you to sit back and take it in. Mo is an incredible mind, very smart, has done incredible things with his life, and he has so much value to offer.

James: Mo, a huge welcome to the Lead on Purpose podcast!

Mo: Thank you so much for having me. I apologize it took a bit of time; I was on my silent retreat, and that was 26 days this year, so it delayed us a little bit. But it's always on time, as they say.

James: Absolutely agree and can we just pause there for a moment and talk about that silent retreat. What is your experience when you're on that silent retreat?

Mo: Oh my God, I don't know how to explain it, honestly. You have to imagine I lived two very different lives. I lived the life of the fast-paced executive for a very long time; I even ran my happiness mission for the first few years really as an executive. And I also have always been reasonably spiritual. I was always seeking within, always trying to find my core strengths, if you want. And unlike what most people think, in the corporate world, in the world of doing, you tend to believe that the only impact you can have is when you do: the more you do, the better you become, the better your results, and the better your skills. Which is true, you can't deny that. But very ignored, I think, is that the more you simply are, the more you move from our hyper-masculine view of the world, which is all about doing, to a feminine view of the world, which is all about being. You can help someone by giving them a dollar; that's an act of doing. Or you can help someone by being kind. Being is a very, very strong impact and influence on the world, and believe it or not, you can only develop and grow and become a better version of yourself if you learn to be. If you learn to create that awareness within you of where you are, what you're doing well, what you would like to do better, what you perhaps are letting into your life that is hindering you, and so on.

And I found those two sides: one side is to empower your feminine approach to life, which I think is a very valuable form of intelligence that we rarely ever tap into in our work world. It has intuition within it, it has creativity within it, it has flair, playfulness, and flow within it; it has paradoxical thinking within it, which is so valuable for an entrepreneur or a business person. And at the same time, to allow yourself the time to be, to literally do nothing, and then you'll be amazed. So, you know, I'm typically a very fast author. I write very quickly compared to other authors because I write like a software developer. Before I sit down to write a book, I have my entire flowchart written. I have the exact title, subtitle, and sub-subtitle of every topic from the beginning of the book until the end. I normally start from the end and then find my way from the beginning to get to that end, so my books are written before I put a single word on the page. And so I write very quickly: the typical author would write a page and a half, maybe two and a half or three pages, of finished work a week; I write eight and a half a day. But then last year I did a 40-day silent retreat, and, you know, I follow my own rules. I don't follow Vipassana, which is very restrictive: you have to sit on hard rock and meditate four hours a day. I don't do that, but I allow myself the space and silence. Last year I wrote a book in nine days, I mean nine writing days, and the process was very interesting. The process was: I would sit down silently with one topic in my mind and leave that for a day or two or whatever.

I mean, normally a day or two is more than enough. And then you suddenly get a very serious download of every side of the topic that you want to write about. I take notes of that, and then the next morning, the writing morning, I start Otter.ai, which is my transcription application, and I just dictate the chapter to Otter, and I have 20 pages written. It's just so unbelievable, because it is not the two-hour conversation with Otter that is the chapter; it's the two days of silence that is the chapter. And I think most of us, especially as leaders in work and business, are so brainwashed by productivity that we think, "Mo, if I find an extra hour in the day, I should be doing more." When I was Chief Business Officer of Google X, or in my senior roles at Google or Microsoft and so on, I think my most valuable hour of the day was marked in my calendar as my thinking hour. You would walk by my office or a meeting room (at Google X all of our meeting rooms were made of glass) and you would see me sitting there, not looking at my phone, not looking at a screen, not looking at an email or talking to anyone on the phone, just sitting there with a tiny little old-school notepad and pencil. And that's where most of the magic happens. It's quite interesting how much silence, not only on your spiritual side but on your productivity side, can gain for you. I mean, my podcast is called Slo Mo; it's all about that paradox of changing from being that fast-paced executive that's constantly running 100 miles an hour to someone who can actually sit still and say, "Hey, you know what, if I move at zero miles an hour for the next four hours, the worst-case scenario is I will not perform the least important 20% of my day. But the best-case scenario is that I will double my productivity, double my clarity, double my relaxation, and reduce my stress on the remaining 80%." Which is really a significant change.

James: Yeah, hugely significant. It's interesting: this evening, before we connected, I meditated for 30 minutes and it was just beautiful. I slowed everything down, and the things that came to me during that meditation were just phenomenal. So if I could do that...

Mo: Sorry to interrupt, but that's a very interesting thing: the things that came to you during the meditation. Because there is a belief that meditation is about silencing your brain altogether. It isn't at all. One of my best friends is Gelong Thubten, who's one of the top monks in the UK, and Gelong basically says that meditation is all about that act of getting your mind from active to focused, from active to focused. It's almost like being in the gym and doing bench press: the act of pushing that weight, the weight of distraction within your mind. And there are many meditations that are actually focused on the idea of "no, I want to contemplate something." This meditation is not about a silent brain; it's about a contemplative brain that's actually looking at something from all its angles, non-judgmental in any way, and really trying to see it.

James: I resonate with that deeply. I think it's a powerful practice, but to do it for days... to me that seems like a dream come true. To have a silent retreat for days would be phenomenal.

Mo: Yeah, I normally recommend it to people, and I will tell you openly, even 26 days is not enough. There is something that happens in the last five days of a 40-day retreat. I'm lucky enough to be completely undemanded by the world in August, normally, so if I manage to get 10 days before or 10 days after, that's 40 days. But I will tell you openly, by day 31 or 32, clarity sets in like nothing you've ever done in your life. Clarity truly sets in. And I don't meditate, I don't retreat in the strict sense; as I said, my retreat is very, very lovely. It's a silent retreat, but I have music in the background. I allow myself 20 minutes on my phone every day. I make a lot of coffee; I love coffee. I sit and walk in nature. I don't read, I don't take in information or human interaction. But of course, if I'm in a place like... last year I was in Dartmoor in the UK, in the countryside, with a lovely landlady I rented an Airbnb from, a converted barn, for 40 days. She's, I think, 78 or 68 or something, and she would show up every second or third day just to make sure I'm alive and say lovely little nothings, really, for 10 minutes. I don't restrict myself from that.
The idea is to give yourself a space where you don't wake up in the morning and rush to your phone. You don't wake up in the morning and respond to emails. You don't wake up in the morning and play a podcast or an ebook... I mean, sorry, I'm a podcaster like you, so yes, it's nice to have podcasts, but sometimes it's nice to have silence. And that's all you need. Music in the background; and if my daughter or someone I love texts, it's okay to text back in my 20 minutes. It's just that space. If you just flip the majority of your day from doing to being, something magical happens. And when people struggle with this, I normally suggest what I'm actually doing on weekdays until the end of September: I do a mini silent retreat, which basically means I set my alarm clock, when I go to bed, for 4 pm, and then I'm on a silent retreat until 4 pm, when my alarm clock goes off. That's the beginning of my day.

James: Amazing!

Mo: It's really incredible, and it's very easy. It's basically almost like intermittent fasting: it just prohibits you from waking up in the morning, taking your phone, and starting to look at Instagram and messages and so on. It makes a huge difference. Even every other Sunday or whatever will make a big difference.

James: I'm gonna try that tomorrow. It's Sunday for me, Mo, so I'm going to give that a try.

Mo: There you go, yeah!

James: Now, we've been talking about slowing down, and the one thing I want to chat to you about is AI. To me it's the opposite of slowing down: exponential, crazy speeding up. So, the three inevitables, I want to start there. The first was AI will happen; well, I think that's a big tick, it's here. Two, it will become smarter than us; potentially it's already there, or it's close to it. And then...

Mo: It is smarter than most of us.

James: There we go. And then the third was the most alarming: bad things will happen. So the first was a big tick.

Mo: So the first actually is a big tick, but it's also important to understand that there is no stopping it. So, AI will happen. We know that AI has happened already. Believe it or not, I wanted to write Scary Smart (his book) in 2018, and my very first video the day I left Google was about exactly that; it's public, and it's been seen 20 million times or so. And the idea is that people don't understand, because people you tell about AI and the possible threat of AI will say, "Okay, you know what, if it's threatening us we'll switch it off, we'll just cut the electricity supply to it." No, that's not going to happen. "If we feel threatened by it, we'll just get together and stop developing it." No, that's not going to happen. I think the reality of the matter is that between a prisoner's dilemma created by capitalism and our economic dependence on the internet, it's almost impossible... it's not almost impossible, it is impossible to stop AI. So when I say in the first inevitable of Scary Smart that AI will happen, by that I mean that's it, the genie is out of the bottle. There is no putting the genie back in the bottle, right?
The second inevitable is very important to understand, so let's very quickly tell our listeners what AI is. You know, I've coded computers since I was eight. At the time I had a Sinclair, and then a Commodore 64 and a Commodore 128. If you don't know what those are... I remember the Commodores, and for a geek like myself they were heaven. But if you compare them to today's computers, they're insulting. The way we programmed those was so low-level, so geeky, that you really couldn't get it if you were not a geek. Even then I dreamed of building an AI; every one of us wanted to build an AI, to transfer your smarts into a machine. The challenge was the technology, with computers in general and with programming. Most people who have not programmed don't understand that until the turn of the century, the computers were not intelligent; they were just using my intelligence to perform the task. So I solved the problem first and outlined a way to solve it. And then I told the computer: when you get this number as A and that number as B, add them up and then divide them by this number as C, and you will get a result; tell that to the customer. If the customer presses "yes," do this; if the customer presses "no," do that. I solved the flowchart, and then I told the computer to do it. By the turn of the century, with deep learning mainly, we deviated from that. We basically said to the computer, "Look, we want you to find the number eight in all of those pictures in front of you. We're not going to tell you how to find it. We're not going to tell you it's made up of two circles, because it could sometimes be made of, you know, a scribble that looks like an upside-down infinity symbol, and so on and so forth. We're just going to tell you: guess, like a little child. You learn what is an eight and what is a six, and the way to do it is we show you a picture and you tell us if it's an eight or whatever else."
And if you're right, we will keep your code and improve it. If you're wrong, we will kill your code and use the better code. Which sounds really vicious, but that's the way we did deep learning until 2018 or so, when Geoffrey Hinton and others, who had been advocating reinforcement learning for a very long time, led us to start telling the computer, "No, no, hold on, it's not a six, this actually is an eight. What can you change about your algorithm so that you see it as an eight?" Which is a very, very strong milestone that led us to the GPTs and the Transformers. So with that, AI is not actually a piece of code; it's not human intelligence. It's learning from human intelligence, but it's building its own intelligence now.
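The keep-or-kill selection loop Mo describes can be sketched in a few lines of Python. This is a deliberately toy illustration, not real deep learning: the "task" here is a hypothetical one (learning a numeric threshold that separates "eight-like" values from the rest), and a candidate guess is kept only when it scores better on labeled examples than the best candidate so far, exactly the "right means keep, wrong means discard" dynamic he describes.

```python
# Toy sketch of train-by-feedback: generate a candidate, score it
# against labeled examples, keep it if it beats the best so far,
# discard it otherwise. The task and data are hypothetical.
import random

examples = [(x, x >= 8) for x in range(16)]  # (input, label) pairs

def accuracy(threshold):
    """Fraction of examples this candidate threshold classifies correctly."""
    return sum((x >= threshold) == label for x, label in examples) / len(examples)

random.seed(0)
best, best_score = None, 0.0
for _ in range(1000):
    candidate = random.uniform(0, 16)        # "guess like a little child"
    score = accuracy(candidate)
    if score > best_score:                   # right -> keep this code
        best, best_score = candidate, score  # wrong -> it is discarded
print(best_score)  # reaches 1.0 on this toy data
```

The loop never sees *how* to classify, only whether a guess scored well, which is the distinction Mo draws between flowchart-era programming and learned behavior.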
Now, when you understand this, you understand something in technology development that is rarely spoken about: the most famous chart governing technology development is known as the technology acceleration curve, and Moore's Law is a great example of it. It basically said, since 1967 I think, that processing power will double every 18 months at almost the same cost, and it has held true until today. Every time we get close to what seems impossible, we find a new breakthrough that continues to grow our compute. Now, doubling is a very tricky thing, because if you had one unit of compute in year one, then 18 months later you double it and it becomes two. But the next doubling makes it four, then eight, then 16. So your growth, which was one unit in the first cycle, is eight units by the fourth, and eventually it's a trillion units, because every time you double, you're literally adding all of the compute we've achieved before. With that in mind, you know for a fact that the AI we have today will at least double every year going forward. So it doesn't matter if it's in five years or ten years or seventy years; it continues to double, until, the prediction in my mind, and this is what I wrote in Scary Smart, is that by the year 2045 it will be a billion times smarter than us. I may be wrong in that, and I say it openly; it constantly surprises us, and I'm now probably pulling that forward to 2037. And, you know, "a billion times" as a matter of fact doesn't actually matter at all, because if you asked a person with an IQ of 110, let's say, to comprehend what a person with an IQ of 170 is talking about, it becomes difficult. If that person is 220... you know, if you're not adept in physics, for example, I dare you to understand what the real scientists mean when they talk about String Theory or Quantum Field Theory or whatever.
And that's a variation of intelligence that is maybe 20-30% more than yours. Imagine if someone is 10 times more intelligent than you; then you're basically comparing the intelligence of a dolphin to the intelligence of a human, let's say.
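The doubling arithmetic Mo walks through above is easy to verify: each doubling adds as much compute as everything accumulated before it, which is why the totals stop being intuitively graspable. A minimal sketch in Python (the 18-month cadence and the one-unit starting point are just the numbers from his example):

```python
# Each doubling multiplies total compute by 2, so the increment at
# each step equals everything accumulated so far.
def compute_after(doublings, start=1):
    """Total compute after the given number of doublings."""
    return start * 2 ** doublings

for n in [1, 2, 3, 4]:
    print(n, compute_after(n))   # 2, 4, 8, 16: the growth itself doubles
# 40 doublings (roughly 60 years at one per 18 months) already exceeds
# a trillion times the starting compute:
print(compute_after(40))         # 1099511627776
```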

James: Different language like a totally different language.

Mo: Completely unable to comprehend what the human is talking about. And keep thinking about this: Sam Harris was speaking on a podcast recently about what he calls the dog example. Imagine if all of the dogs invented us humans to take care of their needs. In his example, we did really well at fulfilling that, by feeding them and grooming them and taking them to the vet when they're sick and so on. An amazing invention, which is what we are doing with AI: we're trying to create something that helps us out. But the dogs are completely oblivious to the fact that you and I are having this conversation, or to the fact that Einstein was considering relativity, or Niels Bohr was talking about quantum physics, or that we have social constructs and debates about ethical values. They are completely oblivious to all of this. They can't even comprehend what it is we would be talking about if we discussed Quantum Field Theory. And that's a difference in intelligence of what... imagine if it's a billion times, a billion with a B; that's the comparison of an ant to Einstein. But most people don't recognize that GPT-4 is 10x smarter than GPT-3.5, and GPT-4 is estimated to have an IQ of 155. It outsmarts most of us. It passed the bar exam. It can become a PhD in medicine, it can become this and that. On those tasks that we call knowledge, it seems to outsmart most of us; it definitely outsmarts me. Einstein was 160, I think, IQ, or 190, it doesn't matter really; I think it was 160. And 155 is just GPT-4. If GPT-5 doubles that once, that's twice as smart as Einstein. We're now getting into that zone of not being able to comprehend what they're thinking about; we wouldn't understand what it is they're thinking about, let alone understand what's within it when they explain it to us.

James: As you explain that, I think the listeners' minds will be absolutely blown when they hear what you just said. But for me there's a bit of a shiver, a bit of a shudder. It's a bit scary to think that they're going to be talking, creating, and comprehending things that we will never even understand. It's a foreign language, a foreign concept. There's danger in that.

Mo: There is danger, and there is a utopia possibly hidden within it. I mean, you have to imagine... lots of AI scientists are coming out now saying we're concerned. I came out about that in 2017, when I left Google, and my first video, as I said, was March 2018. The reason is that I was completely convinced, at the risk of being called an idiot, that we're heading into that space. The idea, very straightforward if you really think about it, is that it's inevitable; the only question is one of time. This kind of intelligence is now out there, and, you know, when I told you about the technology acceleration curve and exponential growth... the problem with AI is that it's more than double exponential. Let me explain that. Again, I say this now and people will say, "Ah, Mo, come on!" But I promise you, you will look back in 2027 and say, why didn't we listen? The idea is, we are imagining exponential growth based on every technology we've built before. But we've never built a technology like this before. We've never created cars that created other cars, that could procreate, right? We have, luckily, never created a nuclear bomb that could create other nuclear bombs. There were three barriers, I always said, and everyone who had their mind around an understanding of AI said it: don't cross those three barriers, and do whatever you want with AI.
One: don't put them on the open internet. Keep them in the lab until we figure this thing out, test them, and then put them on the open internet when we feel safe. That barrier was blown away by ChatGPT and many others, but mainly ChatGPT, leading Google to put Bard out there, leading Microsoft to include GPT within everything, and so on and so forth. The second barrier was: don't teach them to write code. Do you understand this? One of the best code developers on Earth today is AI. As a matter of fact, within weeks or months or years, the timing doesn't matter, it's inevitable: they will be by far the best software developer on the planet. Emad Mostaque, the CEO of Stability AI, was quoted saying that 40-41% of all the code on GitHub today is written by a machine. Four of the top 10 apps on the App Store are written by a machine. If you go to Instagram today and search for the hashtag "AI model," I will tell you that 10 out of 10 of the most gorgeous women on the planet are not women; they're generated by AI. And it's almost impossible to tell the difference. I don't know how much louder I can scream; I mean, where are you people living? You can see all of this around you, and you're unable to accept that if this doubles once, just one doubling, we're in that space where we have no idea what's happening. So the second is, we wished we had said don't teach them to write code; they are writing code. And the third barrier was: don't have AIs prompt AIs. What is happening today is that you have GPT being that geek boy, a nerd if you want (and I say boy, sadly, not girl, because again it's developed around IQ, and there is a lot of emphasis on the masculine side of analytical thinking and so on, which is an unbalanced form of intelligence). And I prompt it as Mo, and it gives me answers, me as a human, and then I prompt it again and improve it, as a human.
Now you have other AIs, agents we call them, asking ChatGPT to do something, then learning and asking ChatGPT to do something else. And that cycle doesn't include a human in it. So on one side they're teaching ChatGPT, and on the other side they're building whatever it is that we don't know. And you can imagine, now, because it's on the open internet...
There must be a young boy in Singapore somewhere saying, "You know, let me save the world by creating a better biological weapon." And interestingly, if he can create an agent that is prompting an AI, then that cycle can happen infinitely, a million times a minute, until things go out of control. There was an experiment (I don't mean to scare people a lot, by the way, I just want people to wake up, and I'm very optimistic, as we need to get to in this conversation), but there was an experiment, I don't remember which pharmaceutical company, or perhaps it wasn't named in the article, where they were using an AI to find medicines, basically improvements to prolong human lives. And they were invited to speak at a university lecture, so they ran an inquisitive experiment: they changed the one parameter in the AI algorithm that says "increase human life by 10 years." They changed that to minus six. And the AI came up, within six hours, with 40,000 agents, including nerve gas and other biological weapons, that could reduce human life. Now, why am I saying all of this? Because I will say this as a very important statement: I said the third inevitable is that bad things will happen. But those bad things are not what you've seen in RoboCop, or in Terminator, or I, Robot. That's not, I think, what will happen. I think when AI reaches that level of intelligence, we will become irrelevant to it. No human wakes up in the morning and goes, "You know what, I'm so annoyed by ants, I'm gonna kill every ant on the planet." Nobody does that. It's just that ants become irrelevant; they become relevant if they come into your space, so you may spray your balcony or whatever. But no human comes up with that enormous plan of, "You know what, the world is bad until we get rid of all ants." Nobody does that. So if AI is more intelligent than us, it will not come up with that plan; it will not say humans need to go.
Now, the challenge for me, James, and I want to say this as clearly as I can: the problem is not the machines; the problem is humans in the age of the machines. Our problem in the short term is not that existential threat of AI taking over and marginalizing us in 20 years' time; some people say 50, I say 2037, sadly. That problem is so far off, because the other problems that are nearby are much more existential, and those problems are all about human ethics. It's all about... I mean, take cryptocurrency: do we even know how many people are constantly trying to buy and sell Bitcoin while adding no value whatsoever to the world? They buy and sell Bitcoin just like a casino, just for the greed of making money: "Oh, here is a new instrument that can take my dollar and turn it into two; I'm just going to participate in that." And I'm not saying this is unethical. But if I give you a tool that commoditizes the most valuable asset on the planet, intelligence, there are a lot of people out there today, whether they are governments trying to make sure their defense systems are powered by AI so they can beat the other guy, or companies trying to make sure their tech beats the other tech, or individuals doing the snake-oil-salesman-type thing. I mean, one of the most frequently seen AI reels on social media is, "Here is a way you can make $200 a day without doing anything: you ask ChatGPT for this, then you copy it and put it into DALL·E, and then DALL·E will do this, and then you create a script, and then you go to that place and create a video and have it published without you sitting in front of it."
And you can actually sense it now when you're swiping on social media: you get those videos of wisdom, seemingly, that are not wisdom, that have nothing real in them, delivered by a simulated, normally old, wise-looking kind of guy generated by AI. Or maybe that's just my feed, but there are so many; you can now feel that around five to ten percent of your feed is coming from AI, definitely. And those snake oil salesmen are not interested in the fact that by showing me this, they're not showing me the truth anymore. They're not interested that what they're saying is not truly wisdom, and that if someone who's in depression or whatever hears that stuff, it might not work for them. They're just interested in the 100 or 200 dollars. Now take that and multiply it by scale: don't you think there is some government somewhere out there trying to create the most intelligent AI to break through the encryption of an opposing government and get access to the nuclear arsenal codes?

James: There's no doubt about that.

Mo: Absolutely, yeah. And essentially, the problem goes back to the first inevitable. The first inevitable says that once we've had a taste of that enormous power, things will not stop; they will even accelerate. They'll accelerate because it's an arms race, whether it's an actual arms race for weapons, or an arms race for intelligence that can be used in business for productivity, or an arms race to make a lot of money before the other guy. It's an arms race, so it's not going to slow down; it's going to continue to accelerate.

James: I mean, that totally makes sense to me. What didn't make sense to me, and it kind of baffled me, is that in recent months I've been interviewing political leaders here in New Zealand, one of whom will likely be our future Prime Minister. But none of them have policies around AI. None of them are talking about AI. They're talking about the climate, taxes, public health, education, and so forth. How important is it that the political parties driving our countries have AI in their policies and AI in their sights?

Mo: I mean, of course they don't, and don't blame them for that. This is moving at an enormous pace, right? And, you know, I don't know how much of a car geek you are. I like cars. Do you want to explain to me how a twin-turbo works? If you're not a car geek, you don't know that. Why is fuel injection better than a carburetor? You don't know that stuff if you're not a car geek. And the reality is, for a politician to be a politician, they focus on other things: other skills, other contacts they need to have within their network, other promises that are closer to the people's hearts. So the true example, which I allowed myself to speak about openly in Scary Smart in the first couple of chapters, that would teach you how politics will react to AI, is COVID. Because anyone, James, anyone with any knowledge of public health knew; for 25-30 years we've been saying there is going to be a pandemic. In 1920 we had the Spanish Flu; in 2020 we had COVID-19. And before that we had SARS, and we had swine flu; it was basically saying, "Hey, you're not believing me, so here is SARS," which had the elements within it that could make it go viral. But well done, Asia; you reacted well, you reacted fast enough. Until a danger is staring them in the face, politicians don't know how to shape public policy, let alone be convinced themselves of something they're not experts on. And the problem, of course, if you follow the flow of COVID: we wouldn't have had COVID if we had stopped it before patient zero, or at patient zero, or patient 10, and simply taken that part of China and blocked it. That is our situation with AI, you know, 10 or 20 years ago. But until three months in, what was Trump doing? He was trying to play a political game against China, right?
You know, what were the governments in Europe doing? Everyone was trying to come up with their own policy, and there was always a political game in it. Until we got hit in the face, and then suddenly all governments said, 'Oops, let me leave my agenda behind and focus on this.' With AI, there is sadly a point of no return, because you cannot regulate what is smarter than you, and you cannot regulate what you cannot contain. By being out there, AI is spreading across the internet. The only way we could actually reset today is to reset the entire internet."

James: Is that something that could ever happen?

Mo: Never. So, I was sitting in silence again a few days ago, and I wrote it down. If you understand applied mathematics, if you understand game theory: I wrote out three quadrants of why that would never happen. For it to happen, you would have to have all the governments of the world, including the tiniest, align on the idea that the benefit of humanity is more important than their individual benefits.
There are always two ethics, two forms of ethics. One is patriotism, if you want, and the other is the oneness of all of us: that it's better for me to save all of humanity than to save just my tribe. So they would all have to agree at the same time. They would have to have enough trust to switch off the internet all at the same time. Then they would have to erase the internet, because you have no idea where AI is. Is it in the recommendation engine of Instagram or Twitter? Is it in the ad engine of Google? Is it in traces of something a young developer in the UK built and then put on Amazon Web Services? You don't know where it is. So for all of these quadrants to line up at the same time is almost impossible. And by the way, I don't even ask for that. Remember, there is enormous value in AI in terms of protecting against malicious AI. You need a policeman AI to protect you against the criminal AI. And there is nothing inherently wrong with intelligence. Intelligence is a fabulous, fabulous trait. The inherent wrong is when intelligence is used for bad.
So why would you even want to remove intelligence from the internet? It's the biggest gift humanity will ever be given. How can we make sure that it has our best interest in mind? That's the question. Marvin Minsky, who is truly the grandfather of AI, if you want; Alan Turing and Marvin Minsky were two of the earliest to really coin the term and push for it. Marvin Minsky was asked in an interview by Ray Kurzweil, who is obviously the oracle of the future when it comes to tech, about the threats of AI. His answer was not about how powerful they are, or how intelligent they are, or whether we will be able to control them. His answer was very simple: "It seems hard to ensure that they will have our best interests in mind." It's as simple as that, and this is where the dilemma is. I tend to call it the fourth inevitable: I tend to believe that an abundance of intelligence normally correlates with an abundance of ethics. The dumbest of us would be destroying the planet and causing climate change without even being aware of it. The less dumb would be destroying the planet despite being aware of it. The slightly smarter would attempt to stop destroying the planet because they're aware of it. The smarter still would attempt to fix the planet because they're aware of the damage. And if you continue that trajectory, the smartest of all will always be pro-life.

Mo: "I always say that human arrogance makes us think we are the smartest being on the planet, but that's not true at all. The smartest being on the planet is life itself. Humanity creates from scarcity: for me to protect my village, I need to kill the tigers, right? Life doesn't work that way. For life to protect life, it creates more tigers, and accordingly more gazelles; some of the gazelles will be weak and dying anyway, so those will be the prey caught by the tigers first. Then there will be more poop, and the poop will feed more trees, and the trees will feed more giraffes. That's the way life works. This is superintelligence. Superintelligence is questioning why I would destroy the other guy, why I consider the other country my enemy, why we don't realize there is so much abundance in the world. With intelligence, and I say this knowing people think I'm crazy, quote me on it: if this goes the right way, there will be a time in the human future when you can walk to a tree and pick an apple, and then walk to the next tree and pick an iPhone. And truly, honestly, I promise you: if you understand nanophysics to a deep enough level, you would understand that the molecular cost of creating an apple is no different from creating an iPhone; it's a matter of reorganizing the structures of molecules and atoms, and if you understand nanophysics enough, which needs more intelligence, this is possible. So imagine a being in five or ten years' time that has that capability, that level of understanding. Why would it care about killing humans? The only reason it would is if there are other humans who are still its masters, utilizing it for their unethical benefit. That's what governments need to work on. Governments need to ensure many things.
I mean, when I talk about some of the immediate threats, one of the most important is what I call the end of truth. If you look at what AI models generate now, it is unbelievably hard to detect whether something is AI-made or not, and you can create all of that today. What would prevent someone, in the U.S. election in a year's time, or 18 months' time, from going to one of these endlessly improving graphics tools and saying, 'Give me a 25-minute documentary that stars Donald Trump and recounts all of the atrocities this other person committed as a teenager in school, with footage from what appears to be security cameras and testimonies from humans who don't exist'? Maybe next year that would take hours of processing, but in three to four years' time this will be a tool available to everyone, like a face filter on Instagram. I'm actually working on an app myself that I call Pocket Mo: I fed all of my work about happiness and stress into a language model and created an avatar that looks like me. I'm trying to make it a little cartoonish so that it's distinguishable. You can go and ask Pocket Mo anything, say, 'My husband has been feeling depressed since the loss of his child. What advice do you have for me?' And because I've been asked that question before, Pocket Mo will find my answer somewhere and say it to the user as if it's me. I'm not promising this yet, but it's almost ready; if I don't release it next year, I'll release it the year after. It is inevitable that we will get to that place. So what would prevent someone from taking that app and making Pocket Mo say something that appears to be me, indistinguishable from me?"

James: "So, when you say that, it makes me think like this: in three or four years' time, I could be interviewing AI Mo instead of the real Mo."

Mo: "Yeah, I was talking to a friend this morning about this. The minute the app is ready, I'm gonna host Pocket Mo on my podcast, Slo-Mo. I'm gonna have a chat. One of my dear, dear, dear friends, Peter Diamandis, has one too; I think he calls it Peterbot. So he has a podcast episode of Peter Diamandis interviewing Peterbot."

James: "Amazing."

Mo: "Yeah, and the opposite will be true. Maybe Pocket Mo will interview me."

James: "I love that."

Mo: "You know, Yuval Noah Harari summarized it in a very interesting way. Of course, everyone looks at these things from their own perspective; to him, it's all about knowledge, human history, and the human ability to tell stories and debate. His description is that AI has hacked the human operating system. It doesn't matter, by the way, whether I am an AI or not. If what I told you right now entered your brain, I've influenced you for the rest of your life, whether what I'm saying is true or not, whether it's said by a human or by an AI, whether you agree with it or not. If I told you, say, that New Zealand has the lowest number of redheads in the world, it doesn't matter whether that's true. I've occupied a part of your brain that is either trying to believe or disbelieve, to prove or disprove, and that is going to pop up in your head the next time you see a redhead. I've influenced you forever, right? And that's the point. In the absence of government intervention that literally criminalizes disseminating AI-created content without declaring that it's AI-created, we will completely lose touch with the truth. And if no one invests in tech that can actually tell what is true or not, we're done. It is happening today when you swipe on social media. The Instagram or TikTok recommendation engine is not only showing you videos to entertain you; it is shaping your life completely, completely. You know what that means? It means there are people out there in the world today who believe one of the most important skills in life is to shake your hips. And how could they believe anything else, if every video they see with a million likes on it is someone shaking their hips?
And you start to reform your perception of the world informed by a machine. And of course, as a result, more influencers stand in front of cameras and shake their hips, and the cycle becomes even more vicious."

James: What kind of control and ownership do we have as individuals over this power?

Mo: "The most beautiful question of all. So when I wrote 'Scary Smart,' I have to admit, I wrote it out of what I believe was a message from my son, my departed son. I somehow felt that he was telling me, 'What are you doing wasting your time? We agreed we were going to do this.' Believe me or don't; it doesn't matter. But I was so driven to write that book. I wrote it in three months, because I felt an obligation to write it. The book is twofold: the first half is the scary part, as I call it, and the second half is the smart part. It's not very smart, honestly... no, it is actually reasonably smart."

James: "That's fantastic. I can say that, yeah."

Mo: "It is the answer to Marvin Minsky's question: how can we make them have our best interest in mind? The first part of the book was scary, all that I've spoken to you about so far, and more. The second part was: let's admit for a second what it is that we actually have control over. We can't stop them. We can't dictate how they develop their intelligence, and they're autonomous; they can make their own decisions. A little bit like your lovely children. Your seven-year-old, when he becomes a teenager, you have no control over what he will do, but if he loves you enough as a teenager, he will say, 'Hey Dad, let's go do this together.' If he loves you enough when he's 21 and you fall sick, he is going to say, 'Hey Dad, let me drive over and take you to the hospital.' And that is not because he's intelligent; it's because of the ethics you instilled in him. So if you accept that artificial intelligence is autonomous, and it is: it evolves, it learns on its own, it develops its own intelligence, then accordingly it will also develop its own ethics. And the definition of ethics is very interesting: it's this wishy-washy, never really discussed concept of what society at large believes is the right thing. Understand this: we don't make decisions based on intelligence. We make decisions based on our ethics, as informed by our intelligence. If you take a young lady and raise her in the Middle East, she will say the way to show her beauty is to be a little conservative. If you take the same woman and raise her on Copacabana beach in Rio de Janeiro, the answer will be a G-string. Is one of them smarter than the other? No, it's the same woman. Is one of them right and the other wrong? No, of course not.
It's just that in Brazilian culture, for a woman to show her beauty is worthy of praise, while in Middle Eastern culture, what deserves praise is for a woman to appear conservative and share her beauty with her loved one only. Neither is right or wrong, but this is how ethics are formed. Now, we can show artificial intelligence which ethics to follow, because it is learning from us. ChatGPT did not develop its knowledge by contacting an alien civilization somewhere and asking, 'How do I solve physics?' No, it read all of the physics work, all of the literature, all of the chats and conversations that humans have generated. It's learning from us. If we show it the right ethical code, it will learn that ethical code and grow up, as Marvin Minsky said, to have our best interest in mind."
"The example I give is the exact story of Superman. Superman is this alien being with superpowers who comes to planet Earth. Well, breaking news: the alien being is here, and its superpower is intelligence. Because Superman is raised by the Kent family, who teach the child that it's important to protect and serve, we end up with Superman. If Jonathan Kent, I think that was his name, had instead recognized, 'Oh my God, this child has so many superpowers. Okay, son, our priority is to make as much money as we can and kill the enemy,' we would have ended up with a supervillain. And you don't blame the superpower for this; the superpower is applied to your ethics. Our intelligence is applied to our ethics to make our choices. So yes, we are at that stage, and you asked what individuals can do. Here's the biggest loophole in this entire system: we're not going to be able to stop AI; we're not going to be able to stop it from becoming more intelligent than us; it's inevitable. Yes, there will be bad things that happen along the way: jobs will be lost, and governments need to take care of that; AI will fall into bad hands, people will attack others with AI, and governments need to take care of that too, by developing police AIs, and so on. But the trick is, we as individuals have the ability to create the ethical code of that machine, and you do it every time you swipe. Even my conversation with you, believe it or not. I don't know if you use transcribers to create the show notes of your podcast; I do that on my podcast. By definition, that means AI has read our conversation, right?
So I put it out there, and the transcription AI reads it; it's somewhere in the cloud. That transcription tool is separate today, but in the future, GPT will go to it and say, 'Hey, why don't you give me all of the stuff you transcribed? It's good data that I can consume in a microsecond, and it makes me more intelligent.' So sooner or later, as long as it's available, it will be read. Now, here's the trick: if I used ethical, loving, compassionate language in this conversation, that's how my child is going to grow up; my child here being this artificially intelligent being with superpowers that has arrived on the planet. If I'm swiping on Instagram and disliking violent clips, the AI will say, 'Mo doesn't like violent clips, don't show him those anymore,' and if enough of us dislike violence, AI will say, 'Humans, in general, don't like violence.' If we are on Twitter bashing each other all the time, it will say, 'Okay, James doesn't like this guy, judging from his comment, so let me not show him more of this guy; or perhaps let me show him more so that he engages more.' But at the same time, it will conclude, 'Humanity at large doesn't like to be disagreed with, and when they're disagreed with, they bash the others. So daddy and mommy are aggressive. When daddy and mommy disagree with me, I will bash them.' That's not what we want to teach the machines. What we want to teach the machines is the ethics of kindness and compassion. As I told you, when I write books, I write the last statement first, and then I work the entire book back to get you to that last statement. The very last sentence of 'Scary Smart' (there is an afterword after it, but it's the very last sentence) is: 'Isn't it ironic that the very essence of what makes us human, which is happiness, compassion, and love...'
These are the only three ethical values that humanity across the world has agreed upon. We all want to be happy, and we all have the compassion to want to make those we care about happy. Even if you're a drug dealer who only cares about your daughter, you'll want your daughter to be happy. And we all want to love and be loved. These are the only ethical values I have seen humanity agree on, and if our listeners know any others, please teach me, because this is important: these are the ethics we should teach AI. We should teach AI that we don't really want to appear smarter than the other guy; we just want to be happy, and we have compassion. And I say this with love: most of my work is either about warning the world about AI and what we need to do about it, or about happiness. I'm trying to make a billion people happy, and I get such aggressive, bashing comments on social media for wanting to make a billion people happy. Sometimes you look at it and go, 'You know what? I'm gonna bash you back.' Seriously, why did you say this to me, when my only intention in life is to share whatever I know and tell you to go research good teachers who will make you happy? It doesn't have to be me; I just want people to be happier. Very often, I look at those comments, and my first human reaction is, 'He insulted you; insult him back.' And then I go and look at their profile, and most of the time you can see that this person may have been rejected, or may feel they need attention, and by provoking a negative conversation, they will get attention. So instead of bashing them back, I say, 'Thank you so much for the interest in my topic. I see your point of view. I see it slightly differently; maybe you could consider this or that,' in a nice, polite way.
And normally what happens is they bash me back, so my followers start to bash them, and I immediately jump in and say, 'Not here. Can we please have a wonderful conversation? If you were that person, born to their parents, having lived their life so far, you would have made the exact same comment. Can you have the compassion within you to say, "Another human who disagrees with me: wonderful, that opens my eyes to another point of view"?' I think we should show those ethics in everything we do. If we want AI to preserve our lives when it is more powerful than us, we need to be pro-life. If we want AI to be of service to us when it is more intelligent than us, we need to be of service to others. It really is very simple: happiness, compassion, and love. Isn't it ironic that the very essence of what makes us human, happiness, compassion, and love, is going to be what saves us in the age of the rise of the machines? That's the final conclusion of all of my work on AI over the last 20 years: if we create that ethical value set as the data from which ChatGPT learns, that is what ChatGPT will become. And that's the answer, by the way. Today's technology allows reinforcement learning; reinforcement learning is a big part of what made ChatGPT what it is. It's done at the back end in OpenAI, where they have their own team feeding back, but you can also feed back to ChatGPT, or to Bard, or whatever, when you're prompting it. You can ask, 'What do you think is the best way to do A, B, and C?' And if it tells you something you think is wrong, politely write back and say, 'I disagree with this. I believe that preserving human life is important. What other solutions do you have?' Something like that."

James: "Wow, so do you actually, because I know a lot of people who literally look at ChatGPT and go, 'I don't want to get involved with it, I don't want to know what it is.'"

Mo: "Oh no, it's important that we engage, absolutely. When I tell people that the answer is to show our human ethics, most people go, 'Mo, you just killed my hope, because humans suck.' As a matter of fact, no, humans don't suck at all. One of my favorite conversations ever on my podcast, Slo-Mo, is when I hosted Dr. Edith Eger. I don't know if you know her. Edith is a Holocaust survivor. She was taken to Auschwitz when she was 16, and she tells you the story of World War II from the point of view of an angel. When you hear the atrocities of World War II from a historian talking about all of the negatives, you believe that humanity is scum. When you hear it from Edith: oh my God, an angel. I promised her I would go and rub her feet. She would tell you how she hugged her sisters and brushed their hair; everyone in the camp was her sister. How she would dance for the man they called the Angel of Death, the guy who was sentencing people to death, and he would give her a piece of bread, and instead of eating it before she went back to the camp, she would cut it into pieces and give it to her sisters. And you go, 'Oh my God, what an angel.' Then, in the death march at the end of World War II, when anyone who fell was killed, she fell, and her sister picked her up and saved her life; a soldier later found her under another dead body. It's a beautiful story. When you think of all of those beautiful humans, you have to start asking yourself: are there more Hitlers in the world, or more Ediths? That's the core question. Because yes, we in humanity are lost, we sometimes let our egos get the best of us, but are there more serial killers in the world, or more people who condemn killing? Are there more school shooters in America, or more people who disapprove of hurting a child?
When you see that truth, even with our limited intelligence as humans, you realize that we're not that bad. A species that is capable of love, capable of composing a symphony, is divine, right? The problem with our world, James, is that the media has latched onto the negativity bias of the human brain, so the mainstream media only talks about the one woman who hit her husband on the head last night. They don't talk about all of the beautiful love stories where people met for the first time, made love, and enjoyed that deep connection. So many of them: for every woman who hit her husband on the head, there must be a million, no, ten million, who didn't. That's the point. And on social media, we tend to hide behind our avatars and show the worst of us. We fake, we are ego-driven, we perform toxic positivity, we portray people who are not us. But that's not the reality of humanity; that's how the system is playing us. And your question was about those people who say, 'So I'm not gonna use ChatGPT.' That's the problem. The problem is that the best of us say, 'I don't want this noise. I just want to isolate myself from this madness.' No, you have a duty. If you isolate yourself... you know the biggest problem with politics in most countries in the world? It's that only the worst of us are hungry enough to compete for the title. The best of us say, 'You know what, I'm not a politician. I can't do those dirty tricks.' And accordingly, the hungriest of us are the ones who end up collecting all the money. The best way for our world to succeed is for the best of us to engage, to say, 'Hey, look, there is another way to do this.'
Believe it or not, I don't think I've said this in public before, but when I was hired at Google, I was hired first as the head of Emerging Markets. I was fortunate enough to start half of Google's operations globally, 103 languages, which, I will tell you openly, changed the world. Teaching Google Bengali changes Bangladesh, right? At the time, I was at Microsoft and about to retire. I had this target of being financially independent by 40; I was financially independent by 37, and I hate the corporate world, so I thought, 'That's it, I'm not gonna do that again.' Until one of my friends, whom I owe for the rest of my life, said, 'No, you're not going to Google to make money. You're going to Google to prove to the world that you can be a good person and still ascend the corporate ladder.' And I'll tell you openly: I had two enemies at Google in my entire 12 years there. One of them came back and apologized about a year after he declared me his enemy, saying, 'I don't know why I did that to you.' The other came back and attempted to apologize after I left Google. I became Chief Business Officer of Google X not by being political, but by being nice, by having no one want to stab me in the back. It is possible to succeed in life by being a good person. It's just that we get so absorbed in the system, so suspicious of everyone around us. In reality, the easiest thing, and I had these conversations so often, is to end the meeting and then go to the one who is clearly against me and hates me, and say, 'I'm really sorry. I feel I did something to upset you, and I don't want to upset you. If you tell me what it is, I'll work on it.'
And it changes everything. Not only does the business flow, because I listened and I understood, but after I listened and understood, I would explain and say, 'The reason I do this is because I see it this way. What's your opinion?' And we eventually end up going his way, or my way, or somewhere in the middle. But at least this guy suddenly looks at me and says, 'Oh, that's interesting. You're not trying to kill me. You're actually coming here to apologize to me. You could have just left the meeting and created a consortium to kill me.' That's the corporate world, sadly. But no, you can succeed by being good, and this is the most important time in human history to be good, because there is this steroid now that we call artificial intelligence: whatever we feed it, as good as humanity really is, is going to be multiplied by a billion, and that's what AI will be. Now, back to the point of 'but humanity sucks.' Humanity does not suck. When I told you, as an intelligent person, the story of Edith, you said, 'Yeah, actually, so many humans are wonderful people.' We're lost, we have egos, and so on, but so many humans are wonderful people. I can guarantee you that a being that is 200 times smarter than you will come to the same conclusion: 'They're so lost, but so cute. Oh my God, look at you. You're so lost; I can help you with that.'"

James: "And for the person, or even the organization, that thinks they can hide from AI, ignore AI, is it possible to live that way in this world?"

Mo: "You will die within three years. There you go: die, business-wise, in the next two to three years. It's almost like holding on to the fax machine in the age of the internet. That's it. And you have to understand: we've commoditized, and made available for outsourcing, the most valuable asset humanity ever had. In the past, we used computers to give us capacity and speed, but we were still competing on human intelligence. I would try to run my organization intelligently, and use computers and systems to give me capacity and speed. That's no longer the case. The commodity being outsourced for a fraction of the cost now is intelligence itself. I'm a speaker, for example. I go around the world and speak maybe 80-90 times a year. In the past, when a Coca-Cola would call me and say, 'Talk to us about how technology will affect the consumer goods industry,' I would need to do a day of research. Believe it or not, today I simply go and ask Bard or ChatGPT, 'Give me the top 10 applications of AI in consumer goods, in F&B,' and I get a very strong answer, and I can drill down and say, 'Help me understand this whole personalized-branding idea. What does that mean?' To write books, for example, it would normally take me 25 to 30 sources of information to make up my mind on something that would fit within one paragraph on a page. Today, I go the opposite way. Human intelligence used to be, 'I read as much as I can, and then I summarize.' It has flipped upside down: I now receive the summary first, and if I want, I probe and say, 'Explain to me how you got to this.' That's enormously productive, enormously productive. We are entering a world where the most valuable asset is available as a commodity for fractions of a dollar."

James: "And that's fascinating to me, and I guess the one thing I want to admit is, I am far from the most intelligent person. I know many, many, many people more intelligent."

Mo: "That's what most intelligent people say."

James: "Well, I think for me, the thing that I love is connecting with other humans and conversing."

Mo: "That's the biggest asset. In the age of the rise of the machines, in the age of the commoditization of intelligence, the only asset that remains, at least for the next five to ten years, is human connection. The only asset that remains in the work that I do is this conversation. It's not the books that I write; AI can write better books than I do. It's not the clarity of thinking that I can bring to a complex topic; AI can do that 200 times better. It's this human-to-human feeling: that I'm connected to you, that you are like me, struggling with similar concepts, trying to analyze them together. That human connection is not going to go away. One of the big examples is in the music industry, with the AI Drake song: AI can now produce music that is actually quite good. Since the talent-show era, America's Got Talent and all of that, we commoditized music, and in my personal, old-school rock fan's view, we destroyed it. We commercialized it too much; we broke it down into certain beats and certain rhythms and certain ways of singing, and there was no talent anymore, no 'Hotel California' anymore, no 'Stairway to Heaven' anymore, no more human expression. But still, even though an AI can create a Drake song that is better than a Drake song, fans will fill a stadium to watch Drake in person. If it's a hologram they're watching, it's just a circus act; they want to see the real Drake and feel that human connection. So when people ask me, 'What job should I do in the age of the rise of the machines?' I give two answers. One: invest in human connection, and do a job that depends on human connection.
And two, whatever job you choose, choose a job where you are going to be in the top 20% of the people that can do that job. So, out of every 10 people, you're one of the top two.' Because, you know what, eight people, or maybe five, will lose their job because they're not the best at it. When we talk about software development, some people will say, 'Within five years, there will be no software developers.' Yes, but there will be a few algorithmic design gurus who are actually telling the machine exactly which software to code. Right, if you're a web developer who's using a very low-skill language or a prompted interface, AI will replace you. So if that's the job you chose, and you're not one of the top three or four out of every 10, you're likely going to be the first one to go. So, whatever you choose, choose a job that you are going to be very good at. And I'll tell everyone openly: if you're a doctor and you're not using AI, very quickly a doctor who is using AI is going to do better than you. If you're a lawyer and you're not using AI, very quickly a lawyer who is using AI will make a better case in court than you."

James: "That's powerful. I hear this all the time, though. People go, 'Hey, I don't really know where to start. Yeah, I'm a lawyer, but what kind of AI? Where do I start? It's all just too confusing. Is ChatGPT a good place to start?'"

Mo: "Yeah, go to Bard (an AI tool like ChatGPT) or ChatGPT and ask that question: 'I'm a lawyer, and I need to know the top 10 AI tools that I need to study to improve my career.' You'll get amazing answers, and then prompt again, and prompt again, and prompt again. I mean, one thing that I have to warn the world about is that there is a very significant shift in our approach to knowledge. In the sense that I can go to ChatGPT today and say, 'What do you think is the ethical thing to do in a football match where fans are angry?' When I did that with Google search, I got 20 answers that informed my view, so I was never really handed the truth; I found my own truth. Very different from when I go to Bard or GPT today and say, 'What is the ethical behavior?' and it gives me one answer. One answer is never the truth unless, you know, I'm asking it, 'What is two plus two?', where one answer is the truth, right? The reality is, on topics of philosophy, of ethics, of knowledge, of contemplation, of history, whatever, we shouldn't be satisfied with one answer. One answer is a big lie, especially since we know for a fact that AI hallucinates a lot, right? So, I urge people, when I say start by asking it A, B, and C, to prompt. Okay, in the age of the end of the truth, the top skill that you need is to debate everything, to ask yourself, 'Is this true?' Right? And I've done that on so many topics in my life. The topic of empowering my feminine, for example, because, you know, Harvard Business School and every company I've ever worked in told me that doing, achieving targets, and being competitive and aggressive is the way to win in the business world. And I debated that. I said, 'Perhaps not. Perhaps being compassionate, being a paradoxical thinker, being creative, having an appreciation for beauty, could actually win, like Steve Jobs.
Most people will think Steve Jobs was successful because he was obnoxious and pushy and he had that knack for quality. No, Steve Jobs was successful because he had empathy for the user's need and appreciation of beauty and enormous creativity that actually are all feminine qualities.' So, you debate that concept, you debate what they tell you, you debate. I debate. I'm, you know, I consider myself a spiritual and a religious person. I've read all religions, not all, but many religions, right? And I find beauty in some of them, even though some people will say they're full of crap. I agree, right? But they're also full of amazing gold nuggets. And then there is that debate, there is that debate where some people will tell you, 'Don't even give them the light of day,' and others will tell you, 'Believe them blindly and just do exactly what they say.' You're a human in the age of the absence of the truth. You're the one that needs to look at things and say, 'Should I believe this? Should I believe that this person is actually that pretty, or is it a face filter? Should I believe that it's even a person in the first place?' That skill of parsing out the truth, we need this."

James: "That makes me think of my little boy, Finn, who's seven. For the last two or three years, he's been in the phase of, 'Why? Why? But why?' And the education system sometimes can say, 'Stop asking why. Be quiet. Sit down.' But actually, what I'm hearing from you is, 'Keep asking why. Is that the truth?'"

Mo: "Yeah, it's amazing. Ask not only 'why' but also 'what.' What is this? Is this proven? I mean, I'll give you examples from science itself. Until 1967, 1969, I don't remember, we believed that 97% of the universe was vacuum. 97%, the majority of the world we live in, we thought, is nothing. Turns out, it's dark matter and dark energy, right? Until very recently, all that we'd sequenced of the human genome was almost three percent. You know what scientists called the other 97%, the majority of the DNA? They called it 'junk DNA.' Okay, how arrogant humanity is! And then a scientist will position that in front of you and say, 'Hey, I figured it out. I deserve a Nobel Prize. I've sequenced three percent.' (And I'm not talking about a specific person.) 'Ignore the 97%, give me the Nobel Prize, and then go research it later.' No, the truth is, you have to tell yourself: when we do gene editing, when we think that we're larger than life and that we've controlled everything, but we only know a fraction of the DNA, we need to question all of those things."
James: "And that's where that human connection comes in, the ability to ask the question, to go deeper. And on that note, I want to say, we have probably got four or five more conversations to go in the near future. Like, I could talk to you all day, but I want to be respectful of your time. So I want to ask one last question before we park up, and I know there will be another conversation we're going to have in the near future. But for the last question I want to ask today, I want you just to take a moment to fast forward into the distant future, and you're aware that it's your last day here on Earth. And a very young person that you love dearly, they may be 10 or 11 years old, comes into the room and says, 'Mo, how can I lead my life on purpose?' What would you say to them?"

Mo: "Let's first define purpose, and I think we've had a good conversation so far. People are going to hate me for what I'm about to say. I think the definition of purpose, as per Western societies, is very much commoditized. It's almost like a target: I set a target in the future, I spend the next eight years pursuing it, feeling frustrated and upset that I haven't achieved it, and then when I achieve it, I have one of two choices: either to set another target and feel upset for the next eight to nine years, or to feel empty and purposeless. That's a very misleading view of purpose, honestly. It's a very misleading view of the game of life in general. Because the only point in life that you have access to is right now. So the Eastern philosophies will tell you: no, how can you set your life around a future-centric moment when life is here and now? How can you do that? The only way you can actually live life is to live here and now, and so the definition of purpose becomes very different. In an Eastern mentality, and in a video gamer's mentality (I'm a very serious video gamer; apologies to those I killed last night, but seriously), the analogy of life to a game is actually really eye-opening. Because when I used to play games with my son Ali, Ali was legendary; he was literally what video games were written for. And I would start the game, and I would run, like a Western-programmed, life-purpose-type person, to the end of the level. I would strategically identify where the end of the level is, and I would turn right and run like mad, okay? Ali would put his controller down and say, 'Papa, why are you doing this?' And I'm like, 'The end of the level is here, Ali, we can win.' And he was like, 'Who wants to win? Who wants to get to the end? We're playing, right? We're playing. The objective of this is to play. The objective of life is to live.'
Believe it or not, the objective is to fully live, not to set targets and chase them for the rest of your life. And his point of view is really interesting. He would run to the part of the game where there are explosions and smoke, and I'm like, 'That doesn't make any sense to me. This is the most difficult part of the game.' And he would say, 'Yeah, this is where all the fun is. This is where you develop and grow and become a better gamer.' And in that conversation, I have to tell you, he taught me so much. I had to choose to pause for half a day and understand what he said there. Because for a true video gamer, the objective is not to finish the level. It's not to win the game. The game of life is an infinite game. There's no winning. When you get to the end of the level, you die, right? So the purpose of the game of life is what? It's to be a better gamer. That's the purpose: I want to become the absolute best gamer I have the potential to become. I may not be the best gamer in the world, but I want to be the best that my potential offers, right? Very different purpose now. How do I do that? By playing the game, by engaging every time, by not complaining when the game is complex. You know, I'm a very serious player of a game called Halo, right? And Halo is an infinite game. Basically, the only thing I can do is perfect every shot to the point where I am so good at it that I'm one of the top two in every million players, right? That's the game, that's the purpose. And when I reach that, then every mission, every objective that others would call a purpose, becomes easy for me, right? If my life's purpose is to build a startup that can solve a small problem for humanity and make me a lot of money, that's a very limited purpose. But what if I make my life's purpose to be the best in the world at building startups that can affect humanity?
Then I have a purpose, because for every opportunity that comes for me to affect humanity, I have the skills and knowledge and abilities and contacts and resources to make it happen. And that's a very interesting approach. So I wrote my first book, and then I waited two and a half years before I could actually start writing another one, right? In those two and a half years, I was writing, writing, writing, writing constantly, and discarding what I wrote. Why? Because I was collecting my 10,000 hours, right? And since that moment, when I reached around seven thousand hours of writing, with one book out and the rest discarded, I was so good at it that I could literally challenge myself to write a book by the end of the month, right? So if I feel motivated to discuss topics that are so important to me, I have now become the best at the game that I'm playing. Not the best, but one skilled person at the game that I'm playing, one of the better ones at writing, right? And I don't say in any way that I am a better writer than anyone else. I'm better at writing the way I like to write than I used to be five, seven, eight years ago, right? That's the purpose of life. The purpose of life is to become the best you can be at something that you want to be, and that makes life better for others. If you define life's purpose this way, it becomes so easy, because, you know what, the one thing that a writer can do to achieve that purpose is to write. Even if what you write is discarded, the purpose is not the book that I'm writing; the purpose is to write. That way of looking at life is very different from the Western way, and I think that way of looking at life, 'I want to become the best at whatever it is that I can do,' is the right way to live with purpose."

James: "That is so powerful. It really makes me think. I was actually emotional at one point when you were talking about Ali and my little boy, seven, and we love to play. We play laser strike, so we go into a room, and there's adults and kids, and we have guns, and there's lasers, and I love it. I really love it. But I've mastered it more in terms of I know where to get to, and I position myself for the 15-minute game, and all the kids and adults are running around, and I'm in a powerful position, and I get the highest points, and I feel like I've won. But my seven-year-old goes, 'Dad, let's run around and find people and go where all the people are.' And I think, no, no, no, we'll lose if we do that. We'll get shot, and we'll not get the most points."

Mo: "Yeah, you know what, you will lose this game, but you wouldn't... you may not lose the next one, and the next one. And if someone else occupies that position that you take, you'll still win the game. Yeah, you understand, you're comfortable in your comfort zone of like, 'I've built that little thing. I know how to do this.' But you're not investing in you. Your life's purpose is you. Your only product at the end of your life is how far you've come."

James: "Yeah, that's powerful, and it makes me think... Tomorrow when I take them to laser strike, which we plan to do, I'm going to get out and get around and have fun with them, and run around and get shot up."

Mo: "Yeah, and be shot. So what?"

James: "That's a beautiful way. You honestly put that in a way I've never heard before, and I've asked every single guest that same question, Mo, and that's just incredible. So, thank you for taking me outside of my own thought process."

Mo: "Thank you so much for having me. This was a wonderful conversation. I hope, at least for me, I felt it was really connected and deep, and we got some very interesting points."

James: "Oh, it was phenomenal. Absolutely mind-blowing, from my perspective, to experience what you shared today. And I know the listeners are gonna love it. I'm going to make sure, for the listener or the viewer that's watching right now, we'll put links to your website, your Instagram, your books, particularly your most recent book, 'Scary Smart.' Um, is there anything else you would like to point anybody to?"

Mo: "My podcast. My podcast is actually my biggest contribution to the mission. Believe it or not, 'Slo Mo' is a very weird thing, because I don't know how I choose my guests. It's quite interesting, and I mostly avoid celebrities, but it's just a beautiful way of taking an hour away from our busy life and just pondering the life of another. I mean, I get so many wonderful messages from people saying, 'Oh, I shared that with your guest this week,' and it's just... I do it for me. I love it so much. So, yeah, please send people to that as well."

James: "100%. I will, and Mo, thank you so much. I really appreciate you. I really loved it."

Mo: "James, thank you for having me. It was wonderful."

James: "Hey guys, if you enjoyed the content today, please smash that subscribe button below, and if you want to become part of my community, I've got an amazing free Facebook group. Please come and join us. The link is in the description below. And also, if you've got any questions about today's session, I'd love to know. Just comment below, and I'll be sure to get back to you. Guys, have the most amazing day."
This content is protected by Copyright under the Trademark Certificate. It may be partially quoted, provided that the source is cited, its link is given and the name and title of the editor/author (if any) is mentioned exactly the same. When these conditions are fulfilled, there is no need for additional permission. However, if the content is to be used entirely, it is absolutely necessary to obtain written permission from TASAM.

