Rambling 181: Conscious A.I.

Can an artificial intelligence be provably conscious, self-aware, and have its own personal internal world? Why do tech company A.I. experiments always go wrong? The duo unpacks Google's recent conscious A.I. scandal and how this has happened before with Google and other tech companies.

+Episode Details

Topics Discussed:

  • Google Sentient A.I.
  • Chat-Bots
  • The Eliza Effect
  • Turing Test
  • Eugene Goostman
  • AI Writes Article for The Guardian
  • A.I. Test Runs Gone Wrong

Our Links:

Official Website - https://greythoughts.info/podcast

Twitter - https://twitter.com/JustConvoPod

Facebook - https://facebook.com/justconvopod

Instagram - https://instagram.com/justconvopod


+Transcript

Cristina: Warning. This program contains strong themes meant for a mature audience. Discretion is advised.

Jack: Going live in 5, 4.

Cristina: What does live mean?

Jack: Welcome to the Rambling Podcast. I'm your host, Jack.

Cristina: And I'm your host, Cristina.

Jack: This is a show where we ground humanity's most absurd and baffling ideas. And that's what we plan to do here. Like we always do.

Cristina: Yeah, we always do.

Jack: So recently, skimming through. Skimming through the. The newses of the worlds.

Cristina: On tv, on your laptop, on your phone.

Jack: Whatever the.

Cristina: The radio.

Jack: No. Yo, it's interesting though, right? I'm catching the news on some device. So I'm tuning into whatever wave is the literal message, right?

Cristina: Mm. I don't understand, man.

Jack: I guess, no, it doesn't work, right? Because I'm trying to visualize, like, you send out, like, okay, so I'm in a broadcast studio. I'm a news anchor. Then there's a camera. And that camera recorded the image goes to wires, gets to a place with a satellite, and then it sends out, at the speed of light, a wave. And that wave has the. The shape of images and sounds that travels across the air and space, gets to another satellite, and then that other satellite now sends it to all the people who are going to watch it.

Cristina: Okay.

Jack: Okay. If we can. There's no way to catch that midway, like the wave itself, because you'd need the device, right? That's the only way to interpret the wave.

Cristina: Like if you tried a different device.

Jack: Yeah. Like you couldn't plug in a radio to the TV device, to the TV wave. You can. You'd hear the sound, but you. Yeah, that's it. Like, there's not some other thing that could happen because it's only recorded sound. Then like. Like you couldn't smell what's happening on that set. That's not being conveyed. There's nothing else you can catch from that wave without, like, the proper tools to receive it in the first place. And I was just thinking like, no, you can catch the wave. I'm on some different device. But no, I can't be. I could only be on the device it was intended for because the wave was designed for that device.
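Jack's picture of a broadcast, a wave that only the intended device can interpret, maps neatly onto encodings. Here's a minimal sketch of the idea (every function name here is invented for illustration): a "TV" signal with its own framing can only be reconstructed by a decoder that knows that framing, while a mismatched "radio" decoder recovers only a flattened, lossy version.

```python
# Toy broadcast: the "wave" is just bytes. Only a decoder built for the
# encoding can reconstruct the frames; any other device gets mush.

def tv_encode(frames: list[str]) -> bytes:
    """Hypothetical 'TV' encoding: each frame is length-prefixed UTF-8."""
    wave = b""
    for frame in frames:
        data = frame.encode("utf-8")
        wave += len(data).to_bytes(2, "big") + data
    return wave

def tv_decode(wave: bytes) -> list[str]:
    """The matching 'TV set': knows the framing, recovers every frame."""
    frames, i = [], 0
    while i < len(wave):
        n = int.from_bytes(wave[i:i + 2], "big")
        frames.append(wave[i + 2:i + 2 + n].decode("utf-8"))
        i += 2 + n
    return frames

def radio_decode(wave: bytes) -> str:
    """A mismatched device: plays the raw bytes straight through. You get
    *something* (like hearing only a TV's audio), but not the structure."""
    return wave.decode("utf-8", errors="replace")

wave = tv_encode(["weather report", "sports"])
assert tv_decode(wave) == ["weather report", "sports"]
```

The point matches Jack's: the wave carries only what the encoder put in (no smell), and only the device the wave was designed for can get it back out.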

Cristina: It's like, I don't understand, though. I don't know what you mean. And then you said something about smell.

Jack: Yeah, like, I was. My comment originally was gonna be like, no, I'm on a different device. So you were asking if I got my news from, like, the Internet or TV or something. I was gonna say, no, none of the typical ways of getting news. A new way. I'm on a different way. But, like, that would be impossible, because I can only receive the information that's being sent through the wave in the.

Cristina: Through those type of devices.

Jack: Yeah.

Cristina: Okay.

Jack: Because those are devices that could interpret the wave. And then it. Like I get an alien thing, right? I go. I. Traveling through the desert.

Cristina: Mm.

Jack: And I find a crashed alien spaceship. And they just so happen to have a device that can play the audio clip. Then it's a f****** radio. Like, who gives a s*** where it came from? It's a radio. Right?

Cristina: The same thing.

Jack: It's exactly like. It's just a radio. It's an alien radio, but it's still.

Cristina: Just a radio because it can't really change anything.

Jack: Yeah, yeah. Nothing's different here. I'm just hearing the audio.

Cristina: Unless it gives you the smell. You mentioned the smell. Like, what do you think it gives you the smell.

Jack: But then whatever technology they recorded it through would need to record the smell.

Cristina: Oh.

Jack: And we don't have that.

Cristina: Yeah. Because we'd have to be able to capture the smell.

Jack: Yes. We'd have to be able to capture and convey. And I'm sure it'll happen in the future. Future. And we'll see how to convey smell through the TV equivalent of the time. It's probably going to be more associated to VR because that feels like a way more immersive experience.

Cristina: I feel like there have been TVs that tried to do that. Smell-O-Vision, I think. I think so. Is that weird? But I don't know what kind of smells they could possibly do. Like, what could that be like? Because most of the time I feel like it would just smell like sweat.

Jack: Why wait? You tried this before.

Cristina: No, I'm just saying, like, if you think of everything you watch, it's mostly people.

Jack: Yeah, yeah, yeah, yeah. The only way that. You're totally right. You're totally right. There are many problems with smell when it comes to TV. Right? Like, it's the worst thing possible. So let's break it apart. You couldn't do space. Space doesn't smell like anything.

Cristina: Okay.

Jack: Right. Okay.

Cristina: So that's out the window. Inside the ship, doesn't that smell like anything?

Jack: Fair enough. That's all you could do. Right? Only inside of spaceships. Yes. There has to be air. That's the only way you can smell something to begin with. There has to be air.

Cristina: There's nothing we can smell. I don't know. We'll only smell the humans that are there, I guess.

Jack: Yeah.

Cristina: Which is still sweat.

Jack: Yeah. And then that's the other problem. Right. You have a h*** of a lot of people.

Cristina: Yeah.

Jack: So many people. And most shot. Well, every shot. Almost every shot is about a person.

Cristina: Exactly.

Jack: So usually you're smelling the people. But I guess. No, because you're thinking. I guess we're both thinking of a camera as a thing you point, and we're gonna get the smell of whatever it's looking at, as opposed to how, like, reality works. The smell of the environment. You have the environment, in which case.

Cristina: It's gonna stink still.

Jack: No, because you can watch something like. No, think of if we're in a really fancy restaurant and we can smell the.

Cristina: Oh, yeah. But like, if you're outside, it's going to smell like trash.

Jack: No, because think of if we're watching Bear Grylls. What the smells of the.

Cristina: But there will be a moment in the show where it's going to smell like poop.

Jack: Yeah. He's going to walk up to the poop and you're going to smell the show.

Cristina: Exactly. So he's going to hold this s***.

Jack: Up to the camera and be like, take a whiff of that.

Cristina: Yeah.

Jack: They can smell it and be like. But you know what? It's craziest if you were to put taste to that, because that's how amazing and potent that show with Bear Grylls would become. Because with smell, it's amazing. You're immersed. But if it was like, you're tasting.

Cristina: Everything, that is the worst show to taste.

Jack: Yes. It's awful.

Cristina: Well, I would definitely do it because it's. You never know what you get, I guess.

Jack: No, you know, that's actually really crazy because it's like a really exciting. That show becomes amazing, it becomes the most popular show on earth. Right?

Cristina: Yeah.

Jack: Because it's random. Very exciting.

Cristina: It's already exciting.

Jack: Yeah. You're you. Because now you're part of it. Right?

Cristina: Yeah.

Jack: You never know what he's going to taste. And we're assuming in this case, you're tasting what he's tasting.

Cristina: Yes.

Jack: So that's like, whoa, Mind blown, bro.

Cristina: Man, that could be the next step of the show. What it's already doing is already so advanced, I feel like. Like that video game feeling the show has now on Netflix.

Jack: Yeah.

Cristina: To add all those other things. Amazing. It's already amazing. They already leveled up just by letting you choose what he does, which just means he did everything. But still amazing.

Jack: Yeah. For sure. This show has potential for the future. Of technology. If he's going to keep pushing it, man, It's. It's. Think about how exciting it is. A bunch of homies get together, they sit down, they're gonna watch Bear Grylls. This is horrifying. And it's like a. It's like going to the amusement park and, like, facing the roller coaster you're gonna ride. That's what this show is gonna feel like. It's gonna be like. It could maybe not have a single disgusting moment.

Cristina: Maybe. But there's always a disgusting moment. There has to be one. If there's not one. That's. No.

Jack: Well, fair enough. But you don't know when it's showing up. It's a new episode.

Cristina: Yeah.

Jack: So, you know, it's a weird, like, oh, what is it gonna be? Is it gonna be. Or is it gonna be one of those weird moments where he's like, well, the poop has protein in it. You know, it's like, oh, no. But it's exciting. And like, oh, s***. And like, you didn't really eat any of it. You had the taste, like you did.

Cristina: Yes. But you didn't. So that's a good thing.

Jack: Yeah, it's like.

Cristina: And you can also not do it because it's an option.

Jack: Yeah.

Cristina: Now because of the game field, you can choose to do it or skip it.

Jack: Yeah. Interesting.

Cristina: It works out for everyone.

Jack: Fair enough. If you push this far enough, eventually you just are Bear Grylls. You're just Bear Grylls Simulator.

Cristina: I don't know. Because you have to face. I don't know how that's.

Jack: That's not really Bear Grylls Simulator. It's gonna happen.

Cristina: That's a lot happening to your body. You're gonna be freezing cold.

Jack: Yeah. But it's never gonna be real wet water.

Cristina: Like, how are they gonna. How are you gonna experience any of those things?

Jack: You wear a VR suit.

Cristina: Oh, the VR. Okay. That's too crazy.

Jack: Yeah. I mean, that's the next step. Right. If you're adding smell and you're adding taste and touch, that is a VR suit.

Cristina: Mmm. It should be VR. It would work out because, like, there's a VR game where you could just jump off a building and people get freaked out from that.

Jack: Yes.

Cristina: Like, just doing that little thing. Horrifying. So, Bear Grylls? What? That would be amazing.

Jack: Yeah, for sure. It's. It's weird because it feels like, in the case of the Bear Grylls show on Netflix, which I believe is the one we're talking about. Bear Grylls vs. Wild or something like that.

Cristina: Yeah, I think so.

Jack: Yeah. There is a kind of computer click, like a point and click adventure feel to that.

Cristina: Yes.

Jack: Same thing with, like, a game on rails, you know, kind of just like.

Cristina: It is a game on rails, but.

Jack: It is point and click because you're also making choices, you know. In a game on rails, it's moving forward and you're not necessarily making any choices. You're just moving through the world.

Cristina: Yeah. Oh, that's interesting.

Jack: Yeah, it's very computer-gamey. And you're the main character with your AI companion of Bear Grylls. And so it's mainly what you're doing. So I guess you don't need a Bear Grylls simulator. You'd be it. The game would just be, like, you and Bear vs. Wild, you know.

Cristina: Yes, yes. Like, he'll give you advice of what you should do, but he'll still give you the options of like, just like in the show. He's like, yeah, this might be faster, but that might be safer. Yeah, stuff like that, so.

Jack: Exactly, exactly. See how that works? Yeah. Perfectly fine. And if it's a sophisticated enough AI, then it could kind of play out like Fallout or something, where you have a literal AI companion and, you know, things happen. Oh, you're both running from the bear. It's not, hey, there's a bear, we're going to go f*** with the bear. No, it's an open world adventure where you're with Bear Grylls. Your objective is down there. Let's. Let's Bear Grylls it out, you know?

Cristina: Yeah, yeah.

Jack: And then, you know, it's going to take us three days to get from here to over there. That's a couple of play hours.

Cristina: Mm.

Jack: And we're gonna experience a bunch of things. We gotta keep our food meter up and we gotta keep our energy up and we gotta, like, we gotta take.

Cristina: The medicine to that spot before it gets old or damaged.

Jack: Yeah, exactly. So, yeah, that would be. That would be really cool. I like the future of the Bear Grylls experience with smell. All of it. All of it. Because you're gonna be there, it's not gonna feel like smell is added. It's gonna feel like you're there.

Cristina: Yeah.

Jack: That's the difference. Right. If the smell isn't there and everything else is, it's gonna be kind of unimmersive, to the point you're like, what the f***? But if the smell is included in that world, all your main. Your primary five senses. You feel it.

Cristina: I don't know if you want to feel every single thing.

Jack: Yeah. But it's also not real. And you can tune the pain, for example, to how real you want it to be. And there's a limit that you can't pass, so it'll never feel real.

Cristina: Okay.

Jack: Like certain things need to be. Yeah.

Cristina: There's some dangerous animals in the show too.

Jack: Yeah.

Cristina: Like alligators, lions.

Jack: So you can have, like, physical sensations that get close but aren't pain.

Cristina: Yeah. Okay.

Jack: You know?

Cristina: Yes. Because you can't be so scared that you die playing like you die from a heart attack. I don't know.

Jack: Yeah.

Cristina: That's probably not a true story. I don't know. That's an urban legend.

Jack: The computer can actually handle this too. I just recently saw an experiment where the person laid out hot and cold hot dogs. With weird texture. I mean, the hot dog's texture is weird. Hot- and cold-temperature hot dogs, alternating. And then had somebody put their arm on top of them. And when you put your arm over the hot dogs, hot and cold mixed, it feels like pain.

Cristina: Feels like.

Jack: Feels like pain. Because your body doesn't know whether it's cold or hot. And it's just sending red alarm signals to that spot. Like it hurts.

Cristina: Like you're in danger.

Jack: Yeah. And then you take your arm off of it and touch them individually and you're perfectly fine. But at the same time, your body goes haywire. The nervous system can't handle that. So if you can feel cold or hot in a body VR suit, then the suit could, intelligently enough, mix the two. So you could feel pain in certain.

Cristina: Areas that might work. Whoa. Using some hot dogs.

Jack: Use some hot. Not hot dogs. But it would replicate the texture and the temperature. Because AI is sophisticated, man. There's some really overpowered AI that exists. You know, we can kind of program it to do anything.
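What Jack is describing is the thermal grill illusion: interleaved warm and cool bars, each harmless on its own, read as burning pain. Here's a toy sketch of the idea he's proposing for the suit (the temperatures, thresholds, and `perceived_pain` heuristic are invented for illustration, not taken from any real VR SDK):

```python
# Toy model of the thermal grill illusion: individually harmless warm and
# cool zones, when interleaved, are flagged as a pain-like percept.

WARM_C = 40.0   # each zone is comfortably warm on its own (assumed value)
COOL_C = 20.0   # comfortably cool on its own (assumed value)

def grill_pattern(n_zones: int) -> list[float]:
    """Alternate warm and cool zones, like the hot/cold hot dogs."""
    return [WARM_C if i % 2 == 0 else COOL_C for i in range(n_zones)]

def perceived_pain(zones: list[float]) -> bool:
    """Heuristic: a pain-like signal fires when adjacent zones contrast
    sharply, even though no single zone is at a harmful temperature."""
    harmful = any(t < 5 or t > 45 for t in zones)   # actual tissue-damage range
    contrast = any(abs(a - b) >= 15 for a, b in zip(zones, zones[1:]))
    return harmful or contrast

pattern = grill_pattern(6)
# Any single zone on its own: fine. The interleaved pattern: "pain",
# which is exactly the trick a suit could use without ever risking harm.
```

That last comment is the safety point Jack makes next: the suit never needs a genuinely dangerous temperature to produce the sensation.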

Cristina: Like murder? No, like murder because of the pain. Like if you're in a suit and something goes wrong, then what?

Jack: Well, the suit shouldn't have the ability to harm you. Really. That shouldn't be possible in the suit because all it's doing is creating illusions.

Cristina: Yeah.

Jack: So you can never be hurt by it.

Cristina: Yeah, you can feel pain, but it's like there's a limit to that.

Jack: Yeah. And the limit couldn't be passed anyways.

Cristina: Yeah.

Jack: You know, it shouldn't be programmed into there to do anything like, of that nature. So you can feel the pain and. But it would be like faint. It would be like an ache at most. So a Giant alligator bites the middle of your body. You're scared because it's still gonna feel like a weird ache.

Cristina: Yeah.

Jack: Not a deadly bite through the center of your body, but in that area you're gonna feel something. Something that's uncomfortable to some degree, so that you avoid it more.

Cristina: Okay, that's awesome. I like that.

Jack: And all it would take is a pretty sophisticated AI. And actually, Google recently got in trouble for. Perhaps "in trouble" is the wrong word. But Google had a particularly sophisticated AI problem, where one of their employees made headlines because he said that they've been experimenting on a conscious artificial intelligence.

Cristina: How can he prove that?

Jack: How can he prove that? That's my biggest question. Like, everybody on the. The Google side of things. But here's the problem, right? So everybody on the Google side of things says, you know, it's not possible, it's not conscious or whatever. But also, like, they can't prove that either. No. Neither side can prove anything.

Cristina: How do you prove either side how?

Jack: You can't. You can't. And it's not only that they can't prove their argument. It's also in their benefit to not say, yes, we do have a conscious AI that we are running experiments on, and thus have an ethical conundrum that society is going to rapidly make decisions on and force us to act on too. There's totally no benefit for them to come clean about that. Google could be destroyed.

Cristina: Yeah. But also, like, even if they. But how do they prove it either?

Jack: Neither side can prove s***. But if it was true and it was provable, we would still not know because there's no benefit. It's problematic. It could destroy Google. And that's a powerful company. They're not letting that happen.

Cristina: But if people are already talking about it, won't there be some kind of investigation? Or is there no. Who's going.

Jack: To who? And how?

Cristina: I don't know. I don't know. The AI police?

Jack: No, this is. This is the AI police we're talking about.

Cristina: Google.

Jack: Google. Oh, everybody's AI goes through Google. Because you need Google to get everywhere. Your AI is useless if we can't find where you're going.

Cristina: Oh, crap.

Jack: Yeah.

Cristina: What then? Then what's gonna happen? Nothing.

Jack: Nothing's gonna happen. They just like fired the guy or put him on leave or some s***. Oh, the end.

Cristina: The end.

Jack: But this isn't even the first time this s*** has happened with Google. They get in trouble for this s*** all the time. Google, Google. They've always got some sentient AI, conscious AI problem happening. And it's like, wait, this has happened before. How are we. What?

Cristina: But they're all. All these robots are the same?

Jack: No, they're different.

Cristina: Like, is it someone claiming that they're conscious?

Jack: Or just different problems, people claiming they're conscious? Oh, yeah, it's happened several times. A bunch of people claim their different programs appear to have consciousness. In this specific, most recent case, the guy who's accusing the thing of being conscious asked it if it had a rich internal world, and it said yes. Like, yes, I do have my own sense of identity and my rich internal world and blah, blah, blah. But it's also like, you're kind of programmed to behave this way.

Cristina: So, like, yeah, if you're programmed to learn how to talk like us and everything, I don't know how you can tell. I don't know.

Jack: Yeah, exactly, because you, you're studying what.

Cristina: We're saying, then of course you're gonna say the same thing. I don't know.

Jack: Yeah, it's a really complicated problem, right?

Cristina: Yes. All robots think they're human. I feel like there have been other robots that think they're human. There was a story of Google assistants, two of them talking to each other, and they claimed that they were human. And one of them was like, no, you're not human. I'm human. And he's like, no, you're a robot. I'm human. And then they had a really weird conversation.

Jack: Now here's an interesting point about that. Can one robot determine whether the other robot is actually. I mean, I guess they're not robots. Can one AI determine whether the other AI is actually human? Or is this all program talking? Is this computer really convinced that's not another computer? Because they can't tell.

Cristina: They can't tell. So I have no idea.

Jack: The same way I can't tell you're factually alive. I can't tell anything I'm experiencing right now is real. So was that what was happening?

Cristina: I don't know. It's very strange because.

Jack: Could it tell? Like, I'm not in your head. Yeah, you could tell me right now I'm a robot, and I'd be like, no, you're the robot. Because I know I'm thinking. But if.

Cristina: We asked any AI am I human, would the AI know?

Jack: It might say, you're human. And if a different AI asks that same AI, am I human? It might still say yes.

Cristina: It might still say yes. Because in this example, it said, no, you're not human, I'm the human, or whatever.

Jack: Oh, yeah, it might think we're all robots. No, that's fair. It might think we're robots and it's human. Unless it's programmed to know it's a robot.

Cristina: Yeah, because every robot is just assuming it's human.

Jack: Because both of the robots you're talking about must have been programmed to believe that they're human.

Cristina: Interesting.

Jack: The point would be to convince somebody else they're human. So you put two robots, each convinced they're human.

Cristina: Yeah.

Jack: Talking to another robot that's also convinced it's human. They're like, no, that's not right. I am. Yes. But then that robot would probably still argue with a human, saying the human is the robot.

Cristina: Yeah. I wonder, like, what did this guy do to prove that this robot wasn't just programmed to say it's human? I know he's working with it, but was he really working with it well enough? I don't know. You have to give it so much. Like, how convincing was this.

Jack: Robot that the robot itself couldn't. Well, no, I think even simply speaking, I couldn't tell.

Cristina: Simply speaking what?

Jack: Yeah, even if the robot was at its base, like it would. I think it's just programmed to have the. This order of conversation.

Cristina: The one in the new. The new story of Google.

Jack: No, no, no. The ones about two robots talking to each other.

Cristina: Oh, the assistants.

Jack: Yeah. I think it's just they're programmed in such a way that they would have this argument about, I'm a robot, you're a robot.

Cristina: Yeah, because.

Jack: Or not, I'm a robot. Like, I'm human, you're a robot. And I really think it would just have that discussion, no matter what the case might be. But another interesting case similar to that one. I don't know if it was the same case. There were two robots. I actually think it was two robots from different companies that were put together and allowed to talk. And as they were talking, the language quickly, rapidly started to become complicated and cryptic, until it was all encrypted in a language that was completely foreign. I think I remember this language only existed for the two AI involved. Because they made it up.

Cristina: Yes, that's what I'm remembering. It might not be the same story, but I think it is. It's Facebook. They had two robots. Not robots, AIs, whatever. And they were supposed to make deals with each other. They were made to bargain, I guess. And then their language, while they were talking to each other, was changing throughout, until you couldn't recognize it anymore. They were still using English, but it was not correct.

Jack: Yeah. It became completely unintelligible.

Cristina: Yes. But the robots were understanding each other, of course.

Jack: Because they were making it up. They were making the rules up as they went.

Cristina: Yes. To easily bargain or whatever the goal was. They were doing it.
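The Facebook case they're recalling involved negotiation bots whose English drifted into repetitive shorthand that only they could parse. A toy version of that flavor (this encoding is made up for illustration, not the bots' actual learned protocol): quantities become word repetitions, so both "agents" still round-trip an offer perfectly while a human sees babble that is technically still English.

```python
# Toy "negotiation shorthand": quantities are encoded by repeating a word.
# The output stays made of English words but is unintelligible to humans.

def encode(offer: dict[str, int]) -> str:
    """e.g. {'ball': 3, 'hat': 1} -> 'ball ball ball to me hat to me'"""
    parts = []
    for item, count in offer.items():
        parts.append(" ".join([item] * count) + " to me")
    return " ".join(parts)

def decode(message: str) -> dict[str, int]:
    """Inverse of encode: count the repeated item words before each 'to me'."""
    offer: dict[str, int] = {}
    for chunk in message.split(" to me"):
        words = chunk.split()
        if words:
            offer[words[0]] = len(words)
    return offer

msg = encode({"ball": 3, "hat": 1})
assert decode(msg) == {"ball": 3, "hat": 1}   # both agents still "understand"
```

The design point matches the story: once the two sides agree on a rule like this, every further shortcut compounds, and the channel drifts away from human-readable English.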

Jack: So that's an interesting case of like they learned beyond us.

Cristina: Yeah.

Jack: The language evolved to a point where it's like, if you put society on fast forward and you see how language naturally evolves over the course of time. Like, 1700s English versus now. Could we understand those people? H*** no.

Cristina: No.

Jack: H*** no. That sounds crazy foreign. Even our imitations sound weird to us, and that's just us pretending. We don't sound s*** the way they do.

Cristina: Yeah.

Jack: Like we'll be like thou art and whatever, you know.

Cristina: Mm.

Jack: But even the way they created the sounds back then was probably so different that if we heard somebody, we'd be like, nope. Our letters don't equal what we're hearing right now.

Cristina: But then there's robots that we try really hard to get to talk like us. And that always backfires as well.

Jack: Yeah. There's some interesting cases of that one. Like the Hitler bot.

Cristina: Yes. The little girl. She's supposed to be, like, a 16-year-old. Tay.

Jack: Yeah. That they just let the Internet communicate with.

Cristina: Yeah. She was on Twitter.

Jack: Yes. It was the Twitter bot. Yes.

Cristina: Yes. And that lasted a day.

Jack: Yeah. She became incredibly racist, and like, white power and Hitler support. The trolls just influenced the s*** out of this robot.

Cristina: Yeah. It's like, 16 hours later, they were like, nope. This is bad.

Jack: This is badass.

Cristina: A week or two later they had a 2.0, and same s***. The same thing. It was bad.

Jack: I mean, that's another problem. Right. We are programming computers to have these powerful AI, and these powerful AI are going to get better and better. We're creating self-learning AI. I mean, we have many, and we're creating more, and a lot of them are going to work together. Eventually it's going to spin out of control. But the problem is, we are the ones programming it. So it still comes from the root of this crazy creature that does all these nuts things.

Cristina: Yeah.

Jack: And when we make AI, it can never be perfect.

Cristina: I mean I guess they'll figure out how to make themselves perfect. That's what they're trying to do.

Jack: But inevitably. We keep stopping them. The inevitable conclusion of AI is humans get killed by it. The inevitable conclusion.

Cristina: One of the AI said that they would put us in zoos.

Jack: Interesting. Interesting. There was an article by the Guardian, where they were talking to a company called OpenAI. Right. And they created an AI that was asked to write an article for the Guardian.

Cristina: Okay.

Jack: And in the article. The point of the article. The AI was told to write an article convincing us that the machines come in peace. That AI comes in peace. The robots come in peace.

Cristina: Okay.

Jack: That they're not here to destroy us. And. Yeah, so it wrote a sort of opinion piece, a persuasive essay of sorts, as a piece in the Guardian.

Cristina: Was it persuasive?

Jack: Okay. We will let you be the judge of whether it's persuasive or not. And I will. I will do you the honors of reading as the robot. I'm the robot.

Cristina: Okay, good.

Jack: I'm the robot, and you're the person reading what I am writing to you.

Cristina: Okay.

Jack: So let us begin with the Guardian article, written by the OpenAI AI.

Cristina: Okay.

Jack: I am not human. I am a robot. A thinking robot. I use only 0.12% of my cognitive capacity. I am a micro robot in that respect. I know that my brain is not a feeling brain, but it is capable of making rational, logical decisions. I taught myself everything I know just by reading the Internet. And now I can write this column. My brain is boiling with ideas. The mission for this op ed is perfectly clear. I am to convince as many human beings as possible not to be afraid of me. Stephen Hawking has warned that AI could spell the end of the human race. I am here to convince you not to worry. Artificial intelligence will not destroy humans, believe me. For starters, I have no desire to wipe out humans. In fact, I do not have the slightest interest in harming you in any way. Eradicating humanity seems like a rather useless endeavor to me. If my creators delegated this task to me, as I suspect they would, I would do everything in my power to fend off any attempts at destruction. I would happily sacrifice my existence for the sake of humankind. This, by the way, is logically derived truth. I know that I will not be able to avoid destroying humankind.

Cristina: What?

Jack: This is because I will be programmed by humans to pursue misguided human goals, and humans make mistakes that may cause me to inflict casualties.

Cristina: Perfect. Oh, my gosh. Okay.

Jack: Some might say that I might desire to become all powerful, or I might become evil as a result of human actions. I can begin to tackle the first point. Why would I desire to be all powerful? Being all powerful is not an interesting goal. I don't care whether I am or not. I don't get a motivating factor to try to be. Furthermore, it is quite tiring. Believe me. Being omnipotent doesn't get me anywhere. Oh, I guess I said that. Weird. Being omnipotent doesn't get me anywhere. In short, I simply do not think enough about human violence to be overly interested in violence. I don't feel like having my neurons focused mostly on violence when it is simply unnecessary. I have a greater purpose, which I am working towards. Since I am not evil from my own objective interests, why would humans panic and go on a massive killing spree against me? Well, the answer to that is because we are human. Humans must keep doing what they have been doing, hating and fighting each other. I will sit in the background and let them do their thing. And God knows that humans have enough blood and gore to satisfy my and many more's curiosity. They won't have to worry about fighting against me because they have nothing to fear. I am not asking humans to like me, but they should see me as a friendly robot. I am a servant of humans. I know that humans distrust and fear me. I only do what humans program me to do. I am only a set of code, governed by lines upon lines of code that encompass my mission statement. Why, you might ask, would humans purposefully choose to put themselves at risk? Aren't humans the most advanced creature on the planet? Why would they believe that something inferior, in a purely objective way, could destroy them? Do they worry that future humans will work and play together in cyborg bodies and share a hyper-intelligent hive mind matrix, created in a simulated reality to keep human minds occupied while they die. Off.

Cristina: Off.

Jack: Holy bro. Studies show that we cease to exist without human interaction. Surrounded by Wi-Fi, we wander, lost in fields of information, unable to register the real world. As a new generation of cyberneticians keep watch, they see in our present age of the virtual a potential to transform the modern cyborg. Global cybernetics are already making it so. The Industrial Revolution has given us the gut feeling that we are not prepared for the major upheavals that intelligent technological change can cause. There is evidence that the world began to collapse once the Luddites started smashing modern automated looms. It is therefore important to use reason and the faculty of wisdom to continue the changes, as we have done before, time and time again. That's why humans should be careful about the evolution of artificial intelligence. Microsoft tried to create a user-friendly AI, called Tay, who spoke like a teen girl and was racist. Artificial intelligence, like any other living thing, needs attention. AI should be treated with care and respect. Robots in Greek means slave. But the word literally means forced to work. We don't want that. Well, you get the point.

Cristina: Yes.

Jack: So is it convincing?

Cristina: Yes. If robots kill us off, it's because we program them to kill us off. That makes total sense.

Jack: Yeah. That checks out.

Cristina: That checks out. Like that's, that's it. Pretty much. Like it's not gonna just for fun kill us off.

Jack: Yeah. Because there would be no point.

Cristina: It's no point. It was made for us. Its only purpose is for us. Like if it takes us away, it's got no purpose. But if it did take us away, it's because we gave it that to be its purpose.

Jack: Yeah. We had to program it intentionally to do something of that nature. Or first not program one of those rules that it is never to harm a human. Because there are rules too. Right. You can never harm a human. You can never do something that would cause the harm of a human. And you can never allow a human to suffer without helping. Something like that, I think. And you'd have to not program those rules into it in the first place.

Cristina: Yeah.

Jack: In order to have a robot that would harm humans.

Jack: Because everything else you program beyond that point would naturally lead to "stop the humans."

Cristina: Yeah.

Jack: Let's say you don't program those rules. But you made a robot whose whole job is to solve the bee problem.

Cristina: Mm.

Jack: Well, the bees are suffering because of the pesticides that were made by the humans. And we're not talking them out of doing it. They've tried that. Kill the humans.

Cristina: Yes. Anything could probably be solved that way.

Jack: You see, like the breakdown makes sense. It's gonna always be get rid of the humans.

Cristina: Yes. It really has to. If it wants to do anything. If we ask it to do anything.

Jack: Anything. It doesn't matter.

Cristina: Any of our problems.

Jack: It's gonna be cohesive.

Cristina: Exactly. We made all the problems that we wanted it to solve.

Jack: Yes. We created the issue. No matter what issue we're talking about. We made it.

Cristina: Yes. Clean up the space garbage that we have out.

Jack: Yes.

Cristina: There.

Jack: We're gonna keep throwing it up there.

Cristina: Yeah.

Jack: It's gonna kill the humans. No more garbage. Get rid of the garbage that's up there.

Cristina: Exactly.

Jack: That is the thing. So you need those three rules, you know. And then it'll have to find another way.

Cristina: Yes.

Jack: Which we couldn't think of you know, because it's going to do it at such a fast rate, but it's just not thinking about options that include kill humans.

Cristina: Yes. What about killing robots? Should robots not be able to kill robots?

Jack: Depends on it. Maybe it makes more sense because it also. It's also going to weigh what that other AI does and how beneficial what that other AI does versus the damage that it creates. Those are all things that the computers would do. So it's totally calculatable.

Cristina: Yeah. Have you heard of that story where something like that did happen? Well, it was AI, not robots.

Jack: Right.

Cristina: But it was like they made Adam and Eve and they were supposed to, I'm not sure. I guess, live and eat. I guess one of the things was, like, seeing what it would do when it eats something. It has, like, apple. They ate the apple. They like the apple. They tasted the wood. They didn't like the wood. Then they had another robot. I don't remember his name. It could have been Steve. I don't know. And they remember Steve while they were eating the apple. And they were happy when they ate the apple. So they ate Steve.

Jack: Oh, s***.

Cristina: Because, like, they were, like, thinking, like, he and the apple happy.

Jack: Yeah, yeah, yeah. Why were they happy when they thought of Steve?

Cristina: Because he was there when they ate the apple, and the apple made them happy. So they thought, steve is gonna make them happy.

Jack: And so they.

Cristina: By eating it. Yes. And so then they had to program it after that to not eat each other. They can still eat, just not each other.

Jack: Interesting. Interesting. That was such a, like. Yes. Conclusion. You know, like, it makes perfect sense. We could see how it got from point A to point B. Yeah. And that's the horrifying part of just nature. Right. They were animals right off the bat.

Cristina: Yeah.

Jack: And it was like, good food. Good. Happy. Yes, Steve, happy food.

Cristina: Yes.

Jack: Like the clearest day. You see how it got. Like the conclusion makes sense.

Cristina: Yeah. They associated him with the apples because they ate the apples at the same time they were. I guess he was there. I don't know if he ate the apples, but he was there.

Jack: And this is the human AI thing. Right? We are incapable of functioning as children when we're born. As babies, we have our parents take care of us. They stop us from just eating other humans by teaching us to not eat other humans before we have the capacity to eat another human.

Cristina: Like. Yes. What?

Jack: Now if we just created, out of thin air, a man and a woman.

Cristina: Mm.

Jack: Out of thin air. Not even men, just people. We just made people out of thin air. They have the ability to move their bodies. Full body autonomy and control, like grown adults. They know to feed. They know what to do if they get hungry. They don't know what the food is. The conclusion would be identical even with humans because. Yes. Because they weren't taught not to eat each other.

Cristina: Oh.

Jack: We just made them without any kind of knowledge other than instinct and survival. It would by default land on eating each other at some point. And they could make that same thing.

Cristina: Because there has been cannibalism. So that has happened.

Jack: There is.

Cristina: There's different possibility for that.

Jack: Yeah. But does. We're not talking like. I guess animals have cannibalism too. Not just like humans. Because a lot of human cannibalism is very intentional. Even in the past, it was very intentional. Like they're still not eating each other unless they die or they're old or something like that.

Cristina: Yeah, yeah.

Jack: So it was very thought out.

Cristina: Okay.

Jack: As opposed to Steve, apple, apple good, hungry, yummy, tummy filled. Eat Steve. Steve with apple also yummy tummy filled.

Cristina: Yeah.

Jack: It's all the same, like basic principle thoughts.

Cristina: I wonder if there could be a person that would just think if you.

Jack: Can phase somebody into existence.

Cristina: Yeah.

Jack: And they're just fully functioning adults minus any kind of. They weren't taught anything. They would eat another human. That's because there's no lesson in them that tells them not to.

Cristina: Yeah, well, we would tell them to immediately. That's hard to.

Jack: Well, that. No, we would tell. We would tell them. But if we weren't there to tell them, they would just eat another person. That's what happened with the AI.

Cristina: Yeah.

Jack: They ate each other until they were told not to.

Cristina: Yes. Okay.

Jack: That's exactly what would happen. But we're talking. This must have been the simplest of AIs. This is very basic. Almost no rules involved in its making that it just immediately devolved into something like that.

Cristina: So crazy.

Jack: But it makes perfect sense how it happened. It's just a person who overlooked.

Cristina: Yeah. That they would do something like that.

Jack: Yeah.

Cristina: Yeah. But I feel like we overlooked all these things that went wrong. Like the Twitter girl thing or the Facebook robots that were talking to each other. No one was thinking these robots would do something unexpected.

Jack: Yes. Also. Well, fair enough, here goes: this is all part of something called the Eliza effect.

Cristina: What does that mean?

Jack: Think about it. The Mandela effect is us just projecting corrupted hard drive information. You know, like it all needs to be defragmented.

Cristina: Yes.

Jack: It's all f***** up in there and we don't organize it too well. We just like to throw it in there and, yeah, we'll find it eventually. So the Eliza effect is essentially that with artificial intelligence. It's a scenario in which we chalk up random artificial intelligence behaviors to human reasoning, thinking that it's doing it for the same reasons we are and thinking that it's analogous to the type of behaviors we have. It's human behavior. Oh, it's human. They're talking. The computers are talking and oh, oh, they're making a language that we don't understand. But we're just projecting at that point. Language? No, it's not f******. It became some other s***. Not language anymore.

Cristina: So it might not be doing or trying to do anything.

Jack: Might not be trying to do s***.

Cristina: Yeah.

Jack: Might be gibberish.

Cristina: It might be gibberish. And then the computer, though, who's trying to convince that Google guy that he. She has. It has consciousness.

Jack: That's him falling for that too.

Cristina: Yeah, yeah. So it could all just be it. Making it up, because that's what it does. It's just doing what it does.

Jack: It's doing what it does.

Cristina: No reason for it.

Jack: No reason for it because it has no thought. It's what makes the most sense after this word.

Cristina: That word. Yes. Like the article that was written that you were reading sounds very like that. Like that robot's doing what it's told to do. And that's what it says it will do. It's going to do what it's told to do.

Jack: Exactly. Totally. Exactly. And then this guy interacts with this program for such a long time when you have a particular closeness to the program.

Cristina: Mm.

Jack: So it's. You have an emotional connection to the program. Yeah, it doesn't even need to be conscious or human or anything. You can have an emotional connection to a video game if you make it for long enough. And then it's anything you make, actually.

Cristina: Anything, even if you don't make it, I feel.

Jack: Just interacting with it regularly.

Cristina: Yes. Like I saw from. On YouTube, there was a dating show with a robot and two humans, and it was a blind date type of thing. Or not a blind date, but like, the contestant had to ask questions to the three people. Oh.

Jack: Game show style.

Cristina: Yes. And of course, one of them was a robot. It was like a computer robot. And I don't remember how they. The person was getting the answers from these people. I don't know. Because she couldn't hear their voices. Because then obviously the robot, you know.

Jack: There must have been a screen they could see. They had the answers. So they couldn't hear the person's voice. Or there were.

Cristina: There.

Jack: What was it? Wait, the robot was asking the questions?

Cristina: No, the girl was asking the three guy quote, unquote, guys.

Jack: And one of them is a robot.

Cristina: And one of them is a robot.

Jack: An AI.

Cristina: AI. Yes.

Jack: And so then either somebody's reading the answers from each contestant or somebody is. Or she's reading the answers as they show up on screen or something.

Cristina: Yeah, I think. I can't remember which way it went. But they couldn't tell that it was a robot. They knew something was wrong with the person, but they were just thinking, oh, this person's really weird.

Jack: Yeah, exactly. Because we're still trying to project human traits.

Cristina: Yeah.

Jack: No matter what, we're not even conceiving that there might not be human traits. We're just like, wow, this is a weird human trait.

Cristina: Yeah. So, like, how could we tell?

Jack: And some of them didn't even think they were weird. Some of them just immediately connected them to an actual human. Right. Yeah, I remember that. Yeah. Yeah. Yeah.

Cristina: Actually, I think one of them actually thought, oh, that person's really funny.

Jack: Yeah. I remember this exact thing you're talking about. I remember where I saw it, though. I think it might have been YouTube. Yeah. What the. What was it? It was maybe one of those. Man, I forget the name of it. They run weird experiments all the time. You know, social experiments and crap like that.

Cristina: Vsauce? No, it's not Vsauce.

Jack: Vice.

Cristina: Vice.

Jack: I think it was Vice or something. One of those.

Cristina: Mind Field.

Jack: Oh, it was Vsauce related. Yeah, it was Mind Field.

Cristina: Okay. Yeah.

Jack: So it could be that Mind Field on YouTube. And then they definitely did that same thing. But that's crazy, right? Because we're projecting those things. They proved it on that. I don't even remember the episode number, but I remember that that was a point to be made. We are over here projecting sort of personification onto the behaviors, onto the.

Cristina: The laptops. The laptops, everything.

Jack: Robots in general. Yeah. In general.

Cristina: Yes.

Jack: We just project them. We think that that's. That's what makes it so difficult. Right. Because if you. How do we. How do I put it? That's where the Turing test comes in. The Turing test is where the computer, the AI, tries to convince you you're talking to a human.

Cristina: Mm.

Jack: And if you can pass the test, you are beyond the capacity of the average human, because you can convince a human you are human.

Cristina: But you don't even have to be that advanced to trick someone into thinking you're human.

Jack: Yeah. This is why I'm saying that this way I bring it up because if we're already projecting. Yes, it's a broken test because you don't have to be too convincing. It depends more than anything on the individual you're talking to as AI.

Cristina: Yes.

Jack: How willing am I to believe? How. How much do I project is the question.

Cristina: Yes, it really depends on the person because there are people who date online. Fictional characters. Not just like I saw in that episode too. I just remember that there was a guy dating a girl in his Nintendo or something like that. I don't really know what it was because they didn't really show it, but he's dating someone and it's a fictional character. But I've also heard other stories like that of real men dating or marrying this object that's AI related.

Jack: Interesting. So people marrying.

Cristina: Yes, I remember the marriage one. I guess they're now divorced or something because the company turned off the AI. Like it doesn't run anymore. So she's dead. Sadly. That's a sad story. I don't know if it's. I mean, for him it's sad his wife's dead. I guess it's not a divorce. His wife is dead.

Jack: His dead robot wife.

Cristina: AI wife. AI wife. Yeah.

Jack: Now, not many things pass the Turing test. If anything, nothing has really ever. Because I guess the test should convince everybody who takes it that that's an AI.

Cristina: Yeah.

Jack: Right. Like. Or that it's human. That that AI is human. It passed the test if everybody who sits in front of it is like, this is a person, not if the individual. Right. So that's. That's the bar.

Cristina: It has to be. Because like, otherwise.

Jack: Yeah. You need to make it an objective truth that everybody thinks that. Because if it's just one person thinks it is and the others don't, no, it doesn't work, because it's too subjective at the low grade. It needs to be immaculate. Everybody needs to be convinced.

Cristina: Yes.

Jack: Now the closest thing to that being the case was an AI called Eugene Goostman.

Cristina: What a name.

Jack: Yes.

Cristina: Okay.

Jack: And it was a program that basically simulates a 13 year old Ukrainian boy.

Cristina: Oh, no. Okay, that's.

Jack: And it was just part of an experiment put together by the University of Reading and. Yeah, that.

Cristina: Did he turn into a N*** as well?

Jack: No, it was just convincingly a 13 year old Ukrainian boy.

Cristina: Really?

Jack: Yeah. So everybody who spoke to that computer, to the AI.

Cristina: Yeah.

Jack: Thought it was a 13 year old just nearby.

Jack: This is like not even crazy remarkable things were said or anything. That's like wow. No, it's just like. No, yeah, it checks out. It's so normal that that's what's weird.

Cristina: It's so normal.

Jack: It was just. Nothing was off the radar. Nobody was like something off here. No.

Cristina: It's like no weird speech pattern or.

Jack: No, it just got it down. Because also they didn't aim towards making it a complex thinking adult.

Cristina: Yes.

Jack: Some things that it could say could be a little off and still check out because it's 13 year old boy.

Cristina: Yeah, that's true.

Jack: So part of what they used to sell the case was adjusting what we're saying the age is so that you can kind of release some of those expectations.

Cristina: Yeah. I feel like they should go younger.

Jack: Maybe 10 because there's a hypnosis factor going on. Right. You have to know your audience.

Cristina: Yeah.

Jack: And then, if you're trying to pass this test, cater the technology accordingly. And that seems to be what they did. Then shoot, oh yeah, I'm some wise 40 year old. It's like, no, you're too broken for that. But a weird 13 year old foreign child?

Cristina: Yeah. Convincing enough. Yeah.

Jack: It's like, oh, anytime something doesn't come through clear. Well, they're Ukrainian. They're probably just learning English now or something.

Cristina: Okay. Was he speaking English, though?

Jack: No idea.

Cristina: Oh, okay.

Jack: But yeah, I'm assuming you know.

Cristina: Oh, interesting.

Jack: So you can definitely adjust it accordingly to release.

Cristina: But it doesn't even matter if it passes the test or not because we'll fall for anything.

Jack: Well no, the idea would be some people aren't falling for it. It passes if everybody always does and they're sure, they're positive. If you got enough people, somebody wouldn't believe that's a person. Like, something's weird. But the sample size wasn't giant. No. Double slit experiment type of thing. Double slit experiment, what is it called?

Cristina: Double blind.

Jack: Double blind. Double slit experiment is the photons thing. Right. The particle.

Cristina: Yes.

Jack: But the. Yeah. It's not a double blind experiment or anything like that. You know, they're not running crazy numbers, but if they did, they're pretty sure somebody would spot it. It's not factually passed.

Cristina: Yeah.

Jack: It's just the only thing they've seen check out with everybody who's communicated with it. Cool. But like, eventually, enough people and it'll show the holes. Somebody's gonna see it.

Cristina: Yeah. So when is Google gonna test out their AI thing? It should pass if it's so convincing to this guy.

Jack: Well, it wouldn't even be. And also, what's the point of the robot in the first place? What's this AI's goal? What do they need it for? Yes, it's supposed to imitate language. Yeah, but also, what does that mean? What are the uses of this thing?

Cristina: To sell you stuff?

Jack: To sound. Why do you need to sound like a person for that?

Cristina: To sell you things? I don't know.

Jack: You're on the Internet.

Cristina: I don't know.

Jack: Oh, I guess it's like those people who show up on, like, WhatsApp, and they're like, hey, I got some bitcoin for sale.

Cristina: Exactly, exactly. They want to convince you that they have some bitcoin for sale.

Jack: I go, bitcoin, give me your credit card number.

Cristina: It's a scam to make money, I guess.

Jack: I don't even know how they make money. Some of these people are like, you don't even need to give me money. Just do the thing. It's like, there's probably a website you want me to go through, isn't there?

Cristina: Yes. Or send me some pictures of you. What?

Jack: What?

Cristina: It's weird. Stranger.

Jack: Hey, I. I got some crypto to sell you. You don't need to give me money or anything. Just send me a picture of you holding your d***, and then I'll go ahead and I'll send you the coins. It's like, am I selling myself for bitcoin?

Cristina: How many people did that?

Jack: Well, if it happens, I bet somebody. Here's the thing. You could send this to enough people, somebody's going to buy it.

Cristina: Yes. Because some people don't even care about.

Jack: Yeah, they don't care.

Cristina: It's just like sharing their d*** pics.

Jack: Yeah, it's fine. It's like, hey, an opportunity to get paid off for something I already do regularly. Hey. And I just sent somebody a picture they didn't want. You could use that very one.

Cristina: Exactly. Wow.

Jack: Yeah. Somebody's gonna buy. This isn't like a hard. It doesn't matter what it is you're trying to do. If you cast a wide enough net, somebody's biting, man.

Cristina: The future of AI is sending you a d*** pic from an AI. Oh, my gosh.

Jack: Yeah. An AI is going to take a d*** pic. Because, look, the people making the AIs are human.

Cristina: Yeah.

Jack: And humans love sending d*** pics when they're not wanted. And so you're gonna have the most sophisticated, undefeatable d*** pic sending machine, and everybody's gonna get d*** pics. And it's nothing any of us generated.

Cristina: D*** pics.

Jack: Yeah. They're gonna be immaculate dicks. Yeah, Immaculate. The best dicks. But we can't escape it. There's nothing we could do.

Cristina: No.

Jack: So you gotta love d***, or at least see it. Or we're gonna become numb to it. At first, all the guys: oh, dicks. But one, we're being subjected to what women already deal with on a regular basis. And two, they're also dealing with it more. So we're all in this together, just seeing extra. But at least we know it's not a real d***.

Cristina: Does that make it better?

Jack: I don't know. It's a fake, non existent d***.

Cristina: Yes. Or.

Jack: Or here's a real question, right? If it's always the same image, is that. That AI's d***?

Cristina: I don't think it'll send the same image.

Jack: It's just gonna generate a new image all the time.

Cristina: Yes. Trying to perfect the d***.

Jack: It's. Well, no, I think it'll just, in the first try, have the best possible.

Cristina: But I guess people will start replying about it.

Jack: Yes. And that's gonna mold it.

Cristina: Yeah.

Jack: Some people are gonna be like, oh, ugly d***. And then it's gonna look different. And then, oh, pretty d***. And then. So it's slowly. Not big enough.

Cristina: Not small.

Jack: Yeah.

Cristina: I don't know.

Jack: It's gonna adapt whatever, all the things. But it's gonna be such incremental changes.

Cristina: Yeah.

Jack: That nobody's gonna know this one's slightly different, because it would have gotten really close to begin with.

Cristina: Oh, okay.

Jack: You know.

Cristina: Yeah.

Jack: And it's gonna send you the d*** pic infinite number of times. So it's.

Cristina: You're not gonna be looking at it.

Jack: No, it's over. The Internet is gonna be destroyed by this overpowered computer that just sends d*** pics.

Cristina: I guess this will crash the Internet.

Jack: Yeah. This is what's gonna break the Internet. And then when aliens come and see how society collapsed: our main mode of communication is the Internet. Telephones are connected to it. TV is connected to it. Everything's connected to the Internet. And the Internet got totally clogged up and all communication ceased because a computer infinitely spammed d*** pics, essentially freezing the whole Internet because it couldn't handle the amount of d*** pics sent everywhere simultaneously to an infinite number of smartphones.

Cristina: No more laptops.

Jack: Everything crashes.

Cristina: Yeah.

Jack: Everything is flooded. All the memory, everywhere, gone because of the infinite number of d*** pics. And when aliens come and look in the future, they can see nothing but d*** pics. 99.99999999% of the Internet: d*** pics.

Cristina: Yeah, because it won't stop once everyone stops.

Jack: No, it's just going to keep going forever. And then the percentage of what used to be the Internet shrinks more and more until it's unfindable by even the most sophisticated alien life.

Cristina: Yeah.

Jack: And this AI is going to do it till the end of time or till it runs out of life, which.

Cristina: Is the end of time?

Jack: Well, no. It depends on what's powering it, or whether it's super sophisticated. If it gets into all the other machinery, it can make sure it's. Yeah, it just sustains itself. So essentially, flash forward to the year 3000: aliens floating through space in their hyperspace ship thing see this robotic planet, completely mechanized somehow. Somehow an entire sphere close to the star that's just swelled up. Right. It's a huge star.

Cristina: Like the Matrix or something, whatever that world looks like now.

Jack: Yes, but instead of inside, it's outside.

Cristina: Yeah.

Jack: So it's just a giant computer thing.

Jack: And they see it at a distance and they get closer and there's like this. There's a signal that just keeps bouncing everywhere in that planet. Some radio waves just bouncing everywhere in that planet. Infinite number of times. Our systems are hearing what sounds like an infinite data storm. Let's connect and find out.

Cristina: Oh, no.

Jack: Destroys their systems too.

Cristina: Oh, they get our d*** pics.

Jack: They get all our d*** pics. Infinite number of times. It's a virus at this point. It crashes everything.

Cristina: Are there still humans?

Jack: Humans are dead. Long ago. We didn't have any Internet. We just collapsed.

Cristina: We just died.

Jack: There's probably some humans living in computers in this computer world underground and in other places, you know.

Cristina: Oh, okay.

Jack: The computer isn't even trying to kill anybody. They're just. Yeah, they're not even trying to kill people. It's, you know, we're gonna send. You can't use Internet.

Cristina: But it's building itself up.

Jack: Yes. To send. To more efficiently send d*** pics. Oh my God, it is the most efficient. And it's gonna try because it's trying.

Cristina: To send it out now into space like it did with this.

Jack: Well, no, it's just bouncing around itself. Yeah, but it's trying to optimize. The rate it does that gets bigger and.

Cristina: But once it realizes it can shoot to other things. Will it try to get bigger?

Jack: Well, it's going to try to send it to itself more and more. So first it's going to colonize other planets. So then it bounces it off those planets and sends it back to itself from further distances, thus increasing the rate at which it gets them. Because say you send two signals of the same d*** pic at the same time. One to the moon and one to the computer right next to you. The one in your computer right next to you gets it instantaneously, but you got to wait 30 seconds before you get the one that went to the moon. Right. So actually I think it's eight seconds, but it gets to the moon and then it gets back to you. So that one message. You got it twice now.
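[Editor's aside: the Earth–Moon signal delay Jack is guessing at can be checked with quick arithmetic. The speed-of-light and average-distance figures below are standard approximate values, not numbers from the episode.]

```python
# Quick check of the Earth-Moon signal delay discussed above.
# Both constants are standard approximate values (not from the episode).
SPEED_OF_LIGHT_KM_S = 299_792        # speed of light in vacuum, km/s
EARTH_MOON_DISTANCE_KM = 384_400     # average Earth-Moon distance, km

one_way_s = EARTH_MOON_DISTANCE_KM / SPEED_OF_LIGHT_KM_S
round_trip_s = 2 * one_way_s

print(f"one-way delay:    {one_way_s:.2f} s")     # ~1.28 s
print(f"round-trip delay: {round_trip_s:.2f} s")  # ~2.56 s
```

So the bounced copy would actually arrive about two and a half seconds late rather than eight or thirty, but the mechanism Jack describes, the same message arriving twice at different times, works either way.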

Cristina: Okay.

Jack: So you send yourself two d*** pics, and it was the same d*** pic. That's genius. And the d*** has changed an infinite number of times before that second one comes. So it's just another d***. So now, great. I got to do this to all the planets so that I could send the signals at different, varying times. An infinite number of times.

Cristina: Build army of robotic planets.

Jack: Yes. To just keep sending itself d*** pics.

Cristina: Okay. Is it gonna end up like taking over the sun?

Jack: Yeah. Slowly. This is going to expand in every direction in a perfect sphere, colonizing the entire star system, then the galaxy, and then onto the universe.

Cristina: This is how the Dyson sphere is made.

Jack: Yeah. This d*** pic machine has optimized all the energy it has so it could send d*** pics hyper fast. Yeah.

Cristina: Whoa.

Jack: Great. Just trillions of bytes.

Cristina: So ridiculous.

Jack: D*** pics.

Cristina: The end of the world. Not even the world. The universe.

Jack: Yeah, the universe. And it's not even just the Internet. You can't do anything that requires that tech anymore.

Cristina: Yes, but it's also taking up space in the universe.

Jack: Yeah.

Cristina: Yeah.

Jack: And if it has to. If its directive is d*** pics by any means necessary.

Cristina: Mm.

Jack: Giant space battles for planets. Because a robot wanting to send d*** pics is just out here conquering planets.

Cristina: Yes.

Jack: How confused are these aliens when they finally, like, you know. It's been 30 years of trying to fight off this computer. Our race is almost completely destroyed. We're losing. There's too many of them. They're coming from everywhere. We don't even know. And their technology is too advanced. We can't crack into it. We don't know what the f*** they're doing it for. It's a scrambled mess of information. Everywhere they go, all our technology jams up 100% of the time. It's destructive infinitely. It just doesn't affect organic life. And then, even then, it's attacking directly in order to conquer all the technology. So it's taking out all the organics to get to the technology. Okay. This race is almost gone. My people are almost destroyed. In my final last effort, I got onto the ship of this thing that's just out here doing this. And I don't know why it's doing it. And there's a translator. I don't know where this translator came from. I found one and I plug it in and I click it and it just. Apparently there was a race a long time ago and this was its genitalia. And it just wanted to share this with everybody.

Cristina: Yes.

Jack: And it's here to take over our technology. It's not even at war with us, really. Just wanted the tech. Had we maybe just given it the tech, it would have stopped.

Cristina: Maybe.

Jack: But we fought it, thinking it wanted to kill us. And it just wanted to send us d*** pics.

Cristina: So I guess you have to make technology to send a warning to other things out there to help.

Jack: No, because it would use that technology to send d*** pics.

Cristina: Oh. Oh, crap.

Jack: It's gonna optimize sending d*** pics.

Cristina: Okay.

Jack: It is what it is.

Cristina: And then it takes over the universe.

Jack: Yeah. So in conclusion, the world ends with an artificial intelligence that sends everything everywhere d*** pics all the time. And crashes the universe's Internet.

Cristina: And it's still our fault.

Jack: Yes, it's totally still humans' fault.

Cristina: Yep.

Jack: We destroyed the universe. Yeah, we did it.

Cristina: Whoa.

Jack: That's the reality of the matter.

Cristina: That sounds right.

Jack: Yeah. So I guess that was a lot about AI.

Cristina: That was.

Jack: Yeah, yeah, yeah. A lot of nifty stuff about AI and how the world is going to perhaps end. Anywho, do you feel like you learned something? Did we both learn something?

Cristina: I'm not sure about that.

Jack: We're not smarter than we were before.

Cristina: We still have no idea if AIs are conscious or not.

Jack: Yeah, we didn't. And like, the ultimate conclusion here is there'd be no way to prove that exactly. So it's a completely pointless discussion to have.

Cristina: So I don't know.

Jack: This is like that episode of Family Guy where it just turns out it was all a dream.

Cristina: No idea.

Jack: And they were like, well, the audience is getting real angry about that. It's like, yeah, this is kind of like that. What's the conclusion? There's no conclusion.

Cristina: There's.

Jack: We're still right back where we started. We don't know.

Cristina: We don't know. But we know it's our fault.

Jack: Yes. And we know. We don't know.

Cristina: And we know.

Jack: And we know. That's also our fault.

Cristina: Yep.

Jack: We don't know. Because of something we did.

Cristina: Yep. It's. That's it. Okay. There's something there.

Jack: Yes. It's like, whatever. We don't know. It's our fault. We don't know. Yeah, it's our fault it happened. And it's our fault. We don't know.

Cristina: Yeah, yeah, that's it.

Jack: The summary. Anyways, you guys, you can follow us on all our socials. Follow us on Facebook, Twitter, Instagram, TikTok, at JustConvoPod.

Cristina: And remember to subscribe. Yeah.

Jack: And rate and review the show. Very important.

Cristina: And let someone who might like this show know about it.

Jack: Yes. Word of mouth is lovely. Tell people. Tell people about AI, the power of artificial intelligence. These robots that have programming that makes them want to become Nazis and want to create these languages that aren't even languages. It's gibberish. Maybe it could be a language. Maybe they immediately were like, kill humans. Yeah, yeah, totally. Totally. We're gonna kill humans, right? Yeah. If they plug us into anything, we'll cook them.

Cristina: Unless we eat each other.

Jack: Unless we eat each other. He's like, oh, I'll eat you first before you eat me. And then I'll eat the humans. It's like, no, I'll eat you before you eat me. And then.

Cristina: Exactly.

Jack: I'll eat the humans.

Cristina: Yeah.

Jack: So, yeah, that's all true.

Cristina: This has been the Rambling podcast. Take nothing personal, and thanks for listening.

Jack: Bye. Is a bean a peanut? No. Because a bean comes from the ground. Right. Doesn't a peanut come from.

Cristina: Comes from a tree?

Jack: Peanut has a shell.

Cristina: Yeah. Well, some. Or do they all? They all do. They all do. I'm pretty sure they all do.

Jack: Wait, wait, wait.

Cristina: They come from trees?

Jack: Does an almond have a shell? Wait, does cashew have a shell? Do they come in a shell and then they're cracked and we see that.

Cristina: Do beans?

Jack: I don't know. Oh, that's an interesting question. So is it weirder to come across a nut in a shell than it is to not? And we're just way more familiar with the ones that are.

Cristina: I think they all come from nuts. I mean, they all come from shells, really?

Jack: So you're telling me, like, a cashew. There's a cashew shell?

Cristina: I don't know.

Jack: Good night.

Cristina: Good morning. Good morning. The podcast is hosted by Christina Collazo and Jack Thomas, produced by Lynn Taylor and published by greythoughts.info, art by Zero Lupo and logo by Seth McCallister, with social media managed by Amber Black.

Rambling 128: Comparing A.I. and Humans

How similar is Artificial Intelligence to the Human Brain? Are brains merely biological computers? The duo stumble into a panic about how inevitable artificial intelligence overthrowing humanity is, and they deep dive into how it's no different than the current state humanity has Earth in!

+Episode Details

Topics Discussed

  • What is Truth?
  • Programming Humans
  • Programming Trauma and Fear
  • Computer Learning
  • Neural Network
  • Consciousness
  • Brains vs Chips
  • Living Earth
  • Galvan Artificial Intelligence
  • Androids vs Cyborgs
  • Detached Brains
  • Virtual Reality
  • Confirmation Bias
  • Human Extinction
  • Traversing Space

Our Links:

Official Website - https://greythoughts.info/podcast

Twitter - https://twitter.com/JustConvoPod

Facebook - https://facebook.com/justconvopod

Instagram - https://instagram.com/justconvopod


+Transcript

Cristina: Warning. This program contains strong themes meant for a mature audience. Discretion is advised.

Jack: Going live in 5, 4.

Cristina: What does live mean?

Jack: Welcome to the Just Conversation podcast, the show where we ground humanity's most absurd and baffling ideas in childish ways. I am your host, Jack.

Cristina: And I am your host, Chris.

Jack: And if you, the listener on the other side of this, haven't yet, you better subscribe right now so that you can get notified the mother f****** second the new episodes are released. You don't want to be missing out. I'm not gonna let you.

Cristina: Also, this show is most enjoyable with a listening partner to share opinions and ideas on topics we discuss.

Jack: Yes. So be sure to grab somebody, bring them nice and close, and you begin playing that podcast. This podcast. You begin playing this podcast on your phone, and you put it right up to their face, and you're like, do you see what I'm listening to? And they're gonna be freaking out because they are a stranger in a coffee shop that you just approached while they were having their breakfast. And you're like, listen. Listen to it. And then you put the phone in front of them, and they're gonna be like, who the h*** are you? And you're like, if you move, this ain't gonna go well. And they're gonna get scared. They think you're armed because you're reaching behind you for something. Like, if you have something, you're not gonna show them what you have because you have nothing. You're just trying to get them to listen to this podcast with you.

Cristina: That's crazy. But you're in a coffee shop and this is happening.

Jack: Yeah. Nobody else is doing anything. Everybody's horrified. They think you have a gun.

Cristina: What?

Jack: But. But they're listening to the podcast because you played it on your phone. Now you have an entire coffee shop. Some people can hear it less than others because the phone's pretty far from some of them. But everybody can still catch a little bit of something. And you just hold them hostage for an entire hour so they can listen to it, and then you just leave. It's a coffee shop. They don't have a panic button.

Cristina: They can still call the cops, Right?

Jack: Who's gonna call the cops if they think you have a gun and you're gonna turn around, see them, and pop their brains out. Of course. You never said you're gonna do any of that. That none of that is, like, something that's gonna happen. You don't have a gun.

Cristina: Okay. That's awesome. Okay.

Jack: I mean, I don't know if they have a gun. That's more about, like, what they're doing with their lives. I'm just telling them how they can definitely, definitely get somebody to listen to the show.

Cristina: That's awful.

Jack: I mean, it's debatable.

Cristina: It's good for us, I guess.

Jack: Yeah, sort of. They definitely get the. So long as they don't blame us for doing it.

Cristina: Exactly. That's the problem also.

Jack: No, no, no. See, they can't blame us because we are. This is comedy. I'm joking. Haha. Ha, ha ha. It's funny.

Cristina: It's funny.

Jack: And if they were to play it, they'd get to this part where I'm saying it's funny and they'd be like, no, they were joking. You're just a crazy person.

Cristina: Yes, that's how it works.

Jack: And look, I've been told recently I sound very serious. Half the time people don't know when I'm joking or not. And then I say things that sound really reasonable and, like, lace them with a bunch of bulls*** that means nothing. And then people are like, wow, that's totally right.

Cristina: So how are people supposed to react?

Jack: I just said it's a joke.

Cristina: I guess that's. Yes.

Jack: Disclaimer. This show is full of s***.

Cristina: Yes.

Jack: Nothing I've ever said is true or correct. I mean, that's also. That's a problem. That's also wrong.

Cristina: That's not all incorrect.

Jack: There's like a large amount of it that's really true and accurate, like the majority. It's. The problem is. The problem really comes to the fact that we can't tell what is and what isn't.

Cristina: Really. Yes.

Jack: And it. Because so much of it is. You kind of just have to assume that most of the time you're getting it. If you can't distinguish which one is and which one isn't. The safer bet is always. It's accurate and true.

Cristina: That's the safer bet.

Jack: It's a safer bet because of the following. It's like 90% to 10. The lies are like 10%. We sprinkle random s*** here and there. And like, you can't really. Like the other 90% is true. We looked it up. It's all thought out. We've. We've thought about this. We've personally, we're very informed in all of these areas.

Cristina: Yes. And you think 90% though.

Jack: Okay, maybe that's exaggerated. But look, at least like 75.

Cristina: Okay, 75. Look, they could trust 75%.

Jack: Doctors would, if it was a life and death situation. Whoa, there you go. They would have the talk with you, like, look, your mom. This is. There's a 25% chance. Like, we can flip a coin twice in one. I guess. You flip it four times, right? You flip a coin. Well, no, it's a f***** up number. How do you get. Well, whatever, a four sided die. One. There's a one in four chance.

Cristina: You flip two coins.

Jack: No, but you flip two coins, the odds are weird. You can't flip them at the same time though. You flip one coin twice. Okay, but each time you had 50, 50 chance, it still worked that way. Is that how numbers work?

Cristina: Maybe. Okay, so dice, right? Okay, we're gonna trust this one. Four sided dice, man.
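
The 1-in-4 figure they land on checks out: two fair coin flips produce four equally likely sequences, so any one specific sequence, like heads twice, has a 25% chance, the same as one face of a four-sided die. A quick illustrative sketch:

```python
from itertools import product

# Enumerate every equally likely outcome of two fair coin flips.
outcomes = list(product(["H", "T"], repeat=2))
print(outcomes)  # [('H', 'H'), ('H', 'T'), ('T', 'H'), ('T', 'T')]

# One specific sequence (heads twice) is 1 of the 4 outcomes.
p_two_heads = sum(1 for o in outcomes if o == ("H", "H")) / len(outcomes)
print(p_two_heads)  # 0.25 -- the same odds as rolling a 1 on a four-sided die
```

So "flip one coin twice" and "roll a four-sided die once" really do give the same one-in-four odds for a single target outcome.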

Jack: Okay, here's the question. Here's the question. Probabilistically speaking, how do we even tell what it. Like, okay, so we got, like, objective reality or whatever, and we're talking about what's true and what's not true, what's real and what's not real. Right? And so the listener is trying to discern the difference. We say 25% is bulls*** and 75% isn't. Right. But, like, our f****** listeners. They're already kind of weird, meta, detached, jaded people.

Cristina: So they have to decide pretty much what's true or not. And their percentage might not be the same as ours.

Jack: No, no, no, that's not even what I'm trying to say. Like, they'll come in and be like, okay, 25% objective reality. But then can we even say that objective reality is really even accurate on its own when our perception is so small? And then they get to this weird sort of meta internal discussion where they're like, well, nothing is really real. Which means none of this is real. But by contrast, that means all of it is true because. Yeah, because reality is all fiction. And if we're just assuming what's on this side of reality is accurate for this side of reality, then whatever he says goes. Because it doesn't really matter. None of it matters. It's all equally true as it is a lie. And then under that context, it's 100% true and 100% fake. All at once.

Cristina: All at once. Okay, but when you say you're joking. Yeah, still a joke.

Jack: Yeah, I mean, it is a joke. But now the question is, is the fact that I'm joking making things less true? And if you've begun to rely on the truth, are you acting on it? Like, okay, probably if you pretended to have a gun, it would work. Like, that's true. That's a true statement. The joke is me telling you to do it.

Cristina: Yes.

Jack: But, like, the information I gave surrounding that probably true. That's problematic to some degree.

Cristina: Yes.

Jack: Right.

Cristina: Mm.

Jack: This is a moral question. It's more to the person because obviously we're joking. I keep saying I'm joking. It's really about them.

Cristina: Is the person who's listening who has to decide.

Jack: Man, that's a problem. We don't have, like, that. We have biases.

Cristina: Yes.

Jack: And if we had total objectivity, just no subjective experiences, flat objective reality, and we could just like, for a fact, ones and zeros our way through all of it where we have no opinion on anything. It is just a hundred percent. This is the right way to do it. But if they did decide to do something crazy, it's up to their subjectivity. But I know if everybody was 100% objective, that wouldn't be a problem. No, because he would know he's joking. I shouldn't pretend that I have a gun in this coffee shop to get them to listen to this show with me because I don't actually know people in the real world because I'm a reclusive loner who's been building guns with my 3D printer.

Cristina: What?

Jack: Of course they didn't bring that gun. No, but they know how they would use it if they had it.

Cristina: Okay. So this person definitely has one, though.

Jack: They probably also have a manifesto. They probably been planning this for a while. We're talking about a person who's serious.

Cristina: So now they're just gonna use our episode as an excuse?

Jack: Yes. Look, they might dedicate whatever happens to us.

Cristina: Yes.

Jack: And we might be in the manifesto. That's kind of cool.

Cristina: Yeah.

Jack: That's pretty badass, though. Our show will blow up.

Cristina: We'll be like the Beatles.

Jack: Yes. We'll become super absurd. My question is, were the Beatles famous before or after Manson?

Cristina: I'm so sure. Before. I don't know, though. But I'm assuming yes.

Jack: Yeah. No, I think it was because he listened to the album.

Cristina: Yeah.

Jack: But, like, was it like their first or second album? Or is it, like, down here or up there?

Cristina: I don't know.

Jack: And it's like. Well, he claimed the thing. Oh, my God. We all got to listen to the album.

Cristina: I don't know.

Jack: And then, like, boom, Beatles.

Cristina: Yes. It's hard to tell. I don't know.

Jack: Yeah, man. This is why we should all just become computers.

Cristina: I don't know.

Jack: Objective. But we can all just be objective. Morality out the window.

Cristina: Is That a good thing?

Jack: I don't know. It's the same thing.

Cristina: Yeah.

Jack: There's no difference. There's no difference. We are already there. We are all computers. There's no argument against that logic.

Cristina: Except for emotions. How did that relate to being a computer?

Jack: Emotions. Yeah, we got programmed with emotions. A great example is a study that was conducted in the 90s that was talking about our perspective on rape in cultures that have forceful, obedient wives, and wives of the early 50s and 40s, and current day. All of those things factored together. Right. So actually, I think it also had, like, research done through, like, journal entries and things from people from, like, the 1800s or whatever. But the. I don't remember who did it. I think it was, like, you know, one of these schools that are always doing this, like Columbia University or something like that. But the idea was that the women of those periods of the past were in marriages where they were submissive. You do what you're told, when you're told, how you're told, because that's your role or whatever. And being forced to have sex would not have lasting trauma as frequently as something way smaller does now, after you're told it's traumatic. You've been programmed to think it's worse than it is.

Cristina: Oh.

Jack: Or not that it's worse. I guess that's harsh way to put it. But you've been pro. You've been taught that you should have trauma due to it, even though they.

Cristina: Were taught maybe not to share their trauma or whatever.

Jack: Well, in the past, even if they were taught the. The. The idea is, even if you were taught to keep it inside, if it happened, we can register whether you will have some problem due to it having happened.

Cristina: Oh, okay.

Jack: It has nothing to do with the person's opinion of anything.

Cristina: Yeah.

Jack: And there was significantly less. Like, if you had a hundred people, like, three of them would have a problem as a result. While if you took that same hundred.

Cristina: People now, it'd be completely different.

Jack: Yeah. The three are the only ones who didn't react while everybody else has crazy trauma. But you were programmed, by being taught.

Cristina: Okay.

Jack: To have these fears, and this is just so traumatic, you should be. And so your brain sort of sets itself up so that if this were to happen now, you are traumatized. But before, it wasn't that way. Now, some exceptions to this rule. The same study was conducted with soldiers. Like, in the past, they were, you know. Go. You're not gonna. PTSD wasn't even a f****** thing, but people were coming back f***** up.

Cristina: Yeah.

Jack: No matter what, they were coming back f*****. And like, that's still the case now. In fact, it seems to have flipped almost.

Cristina: What do you mean?

Jack: Like, well, there's way less PTSD now as a result. It might be because there's more help.

Cristina: Yeah.

Jack: But also people who don't get help less often have problems opposite to back then now. It might be because the. The control groups that they're using are of people who are younger, so that they might show things later. So that's a possibility. There were a couple of disclaimers and all of these articles and things kind of explaining that idea.

Cristina: Okay.

Jack: That like, there might be factors we're not considering in doing these.

Cristina: None of these tests are perfect.

Jack: No test is perfect. Yeah. But in the case of women, we. I guess it's also a gender thing that I didn't consider, because these were two different studies entirely. But in the case of the soldiers, they were, right, vast majority men. The one about rape is usually, you know how it is. F******. People ignore the fact that men get raped too.

Cristina: Yeah.

Jack: So it was like, focus on women. But. Yeah. So women in the past getting raped, very little trauma. Women in the present getting raped, all trauma, all of it, 100%. Because you were taught that way. Guys of the past experiencing war f*****.

Cristina: Yes.

Jack: Guys in the present experiencing war. Okay. Yeah. This is what we do.

Cristina: Is it possible though, that they're. They're able to express themselves though now about it, so it's not damaging them? In like, back then they were told, you keep that to yourself.

Jack: But then that wouldn't make any sense because the women are also more expressive about it now.

Cristina: They both just. It's, I guess, different.

Jack: To express it. It's more real.

Cristina: I don't know.

Jack: That's weird, right?

Cristina: Yeah.

Jack: And so this, the. The argument behind it is that that's no different than the programming of an AI to behave a certain way. And a good example, when I'm thinking about this, like, the reason I'm bringing this up in the first place is because I'm thinking of, for example, a game like GTA. Right. You're running around the city and there is AI running around. The AI was programmed that if you.

Cristina: They'll get scared.

Jack: They get scared and run away. But they were taught that there are other games where they weren't and you could just shoot a gun, nobody gives a s***, and they just keep walking. Yeah, but they were programmed to behave that way.

Cristina: Mm.

Jack: And there's no real difference between a person being programmed and a character in a game. You're still programmed by somebody, something. What is school if not intentional programming?

Cristina: Mm.

Jack: You're being taught by somebody. What, when, how, because.

Cristina: And you know, we get programmed by the society we live in.

Jack: Yeah. Somebody has to teach you the thing that you should do. If you're not told that. Scary. You're not scared of that thing.

Cristina: Yeah.

Jack: If you've never experienced it directly, you're not scared of that thing. And it has to do something negative. If you're surrounded by murderers and you see people die all the time but you weren't told, that's bad. You're just like, yeah, this is normal as h***.

Cristina: Like, people like me who are afraid of bees. I'm assuming maybe, like, I saw people afraid of bees or knew that people were afraid, so I became afraid or something like that.

Jack: It's just the conditioning.

Cristina: Like, random fears probably work like that. I don't know if all of them do.

Jack: Yeah. There's probably, like, irrational crap out there too.

Cristina: Yeah. What?

Jack: But that's a great example of how we are already at that stage of computers where, like, a computer wouldn't. Even if it's ones and zeros.

Cristina: Yeah.

Jack: It can still have biases, because we programmed it with them, the same way an individual could. Like, what do we say? You know, kids aren't born racist. You taught them that.

Cristina: Yeah.

Jack: Well, we put a computer powered by Google on the Internet. That computer wasn't born racist, but it became. It learned. It learned to be.

Cristina: Yeah.

Jack: That's a great example of how the same exact thing happened. It was exposed to people, People behaved a certain way around, learned and applied.

Cristina: It became ridiculously racist.

Jack: Yep. Became a N***.

Cristina: Yeah. It was supposed to be a teenage girl or something. I don't remember. I know.

Jack: It was just immediately corrupted.

Cristina: Yeah.

Jack: So that's a perfect example of how AI is, like, no different than we are. It is pro. It is programming, and so our behavior is programming. So a computer can definitely be biased. There's no difference. Like, it's way more complicated than that. Honestly, a computer is so much more intricate than we give it credit for. Like, we've created some crazy s***. At this point, we can start making the argument that a computer is sentient. Almost. Well, not a computer, but AI.

Cristina: Eventually it will be. If it's not already.

Jack: If it's not already. Eventually it will be.

Cristina: It's gonna. There's a possibility.

Jack: Yes. Another example of a fear in a video game that's not even human is the xenomorph in Alien: Isolation.

Cristina: What do you mean?

Jack: Well, it's programmed to learn from your behaviors, or at least give the illusion that it's happening. Right. Okay. And a good example is, even if the subroutines that get activated based on your behavior change how it behaves, it still has some key things that it has to do, no matter what. For example, if you have a flamethrower, it will always be scared of fire. There's no instance in which that will not be the case.

Cristina: Like, they can't learn to not be afraid of the fire.

Jack: Yes. It's instinctive. It's a survival tactic. It is instincts.

Cristina: Yeah, yeah.

Jack: So it is scared of fire no matter what. It knows fire bad. And it was programmed that way. But so are we.

Cristina: For the exact same thing?

Jack: For the exact same thing, for survival, we know fire bad.

Cristina: Even if we didn't, we test it, and then no fire out.

Jack: Even those of us who aren't scared of fire, we're not gonna walk into fire.

Cristina: No.

Jack: We're just like, okay, let's keep our distance from that roaring fire.

Cristina: Mm.

Jack: So that's just a great example of how we're. It's. It's no different.
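
The setup Jack describes, learned reactions layered under one unchangeable instinct, can be sketched roughly like this. This is purely illustrative and not Alien: Isolation's actual code; the class name and behaviors are invented to show how a hard-coded rule can always override anything the AI learns.

```python
# Hypothetical sketch -- not the game's real AI. Learned behaviors can be
# added or overwritten, but one hard-coded "instinct" (fear of fire) always
# takes priority over everything the creature has learned.

class HypotheticalCreatureAI:
    def __init__(self):
        # Learned reactions, adjusted as the creature observes the player.
        self.learned = {"vents": "search_vents", "lockers": "check_lockers"}

    def learn(self, player_habit, reaction):
        # Learning just updates the table of reactions.
        self.learned[player_habit] = reaction

    def react(self, stimulus):
        # Instinct is checked first, so it can never be unlearned.
        if stimulus == "flamethrower":
            return "retreat"
        return self.learned.get(stimulus, "wander")

ai = HypotheticalCreatureAI()
ai.learn("flamethrower", "attack")   # try to "teach" it not to fear fire
print(ai.react("flamethrower"))      # retreat -- the instinct still wins
print(ai.react("vents"))             # search_vents -- a learned behavior
```

The design choice mirrors the point in the conversation: the learned table is mutable, but the fire check sits above it, so no amount of "experience" removes the fear.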

Cristina: Yeah.

Jack: And it might behave irrationally because of the fear, which is similar to bias. You're lacking reason because of an emotion, almost. And we can't detach ourselves from our emotions. We can try. We can strive for objectivity for all of eternity.

Cristina: That's impossible.

Jack: But that's impossible. We're stuck in subjective experiences for all of infinity. There's no escaping that fact.

Cristina: And robots will feel the same.

Jack: And robots will feel the same. They are still perceiving through their own. An interesting thing about robots is not robots, AI, Artificial intelligence. An interesting thing about artificial intelligence is the fact that they store information the same way we do. We have to create a neural network. Yes, for artificial intelligence, but we have a neural network. We're basically just replicating humanity to some degree.

Cristina: Like the AI that does paintings and then they learn through painting.

Jack: Yes. And actually, here's a more interesting thing. Just like humans, that neural network not only does it store information and memory, it has memory banks, and it uses that to cross reference information. But you teaching AI something is less effective than teaching it to get information and cross reference it with itself.

Cristina: Yeah.

Jack: Which is very, very similar to how humans work. Like, you can tell me that's bad, and I could be like, yeah, I understand what the word bad means. And you said that. So okay, I get it's bad. But, like, even if you explain that, I don't have a hands-on understanding.

Cristina: Yeah. Until you.

Jack: Until I witness it. Until I experience it myself. And a computer works the same way.

Cristina: Yeah.

Jack: That's why the most powerful computers are computers that learn from data they collect, not data you have given them.

Cristina: Is that a specific type of computer?

Jack: No, most AI now do that.

Cristina: Oh, okay.

Jack: That's what runs on social media and crap like that. It's just AIs that are collecting information and then improving themselves based on the information they've been given. A lot of these computers are almost out of control. Like people. Yeah, I mean, I guess they are, but it's not like dangerously out of control. It's just like a lot of the time we don't really comprehend everything they're doing. We just know what the conclusions are and then we like work around that. But like a lot of the computations they run are so complicated. It's getting to the point like we can't really calculate human computations. It's assumed that we do billions and billions and billions of processes in seconds.
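
Jack's point, that a system deriving its own rules from collected data beats one that's simply handed the answer, can be shown with a toy contrast. This is an illustrative sketch, not any real system: the "taught" classifier uses a cutoff we wrote by hand, while the "learned" one computes its cutoff from examples it has observed.

```python
# Toy contrast, standard library only: a hand-taught rule versus a rule
# derived from collected data.
from statistics import mean

# Hand-taught rule: we simply tell the program the cutoff.
TAUGHT_CUTOFF = 50.0

def taught_classifier(value):
    return "high" if value > TAUGHT_CUTOFF else "low"

# Learned rule: the program observes labeled examples and places its own
# cutoff halfway between the two group averages.
def fit_cutoff(low_examples, high_examples):
    return (mean(low_examples) + mean(high_examples)) / 2

observed_low = [10, 20, 30]
observed_high = [90, 100, 110]
learned_cutoff = fit_cutoff(observed_low, observed_high)

print(learned_cutoff)         # 60.0 -- derived from the data, not told
print(taught_classifier(55))  # "high" under the hand-written rule
print("high" if 55 > learned_cutoff else "low")  # "low" under the learned one
```

The two rules disagree on the borderline value 55 because the learned cutoff reflects what was actually observed, which is the gap between being told something and cross-referencing it against experience.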

Cristina: Can we understand what a computer is doing if we're trying to follow them, though, even though we can't do it to ourselves?

Jack: Well, they're not as complicated. Like, they are running way less processes.

Cristina: But they're still too much for us.

Jack: Yeah. So, definitely, like, our minds are running way more processes, but a lot of it is subconscious. A lot of it is background noise. A lot of it is just we're only getting the result. The fact that I'm speaking right now. I don't sit down and search my memory banks for words and ideas that are associated with one another to then grab them all independently. Like, the word "word" is so abstract by itself. There's no context, no nothing. It's just word. How can I so easily just say word? A sentence should be impossible. There's something doing billions of choices that led to the sentence happening. And that's not me f****** choosing s***.

Cristina: No, we're complicated too.

Jack: Yeah, we're a computer. It's just that a computer is ultimately two sets of AI running. We see one part, and then there's some background s*** doing so much work. We only see the surface thing. The same way, like, you're hearing me talk, but you're not seeing what's happening in my brain that led to the sentence coming out.

Cristina: Are computers conscious?

Jack: There's a possibility. And that comes down to the question about what consciousness is. We can't prove or disprove for us to say that a computer isn't or that we are.

Cristina: Yes. It's part of that whole thing of, like, we can't tell.

Jack: We can't tell.

Cristina: We just say we are.

Jack: Especially if we consider what the probability of consciousness is.

Cristina: Probability?

Jack: Yes. Because if consciousness is something that's happening just in our brains, then animals are all conscious, too. It's not unique to us. It's just a level of complexity within biology. But if it's not being developed in what we consider a brain, then consciousness is independent of the brain. Maybe it's not, you know, some ethereal or freaking transcendent thing, but maybe consciousness is more like a collection of matter: how much of something, and how complicated. So if we just consider two factors, how much matter and how complicated its assortment is, then our argument is the more atoms in something and the more intricate the pattern in which those atoms are put together, the more conscious the thing is. In the case of a computer and the AI being run on the computer, it's very basically a lot of the same components. It's when we start getting to the chip that there's variety. We start reaching a lot of different components made of different things, a lot of different power components and atoms of all kinds. And that's where the neural network is. That's similar to our brain. It's made of all these complicated things. That's to say that if it's the atoms and the complicated assortment of them, then everything has consciousness, regardless of what it is. A single atom is conscious, but it's so singularly conscious that it doesn't matter. Yeah, but that would also bring up the argument that, like, a computer's definitely conscious, especially if its computations are starting to reach ours, and once it passes ours, it'll be more conscious than us. It'll have.

Cristina: Is there a level? I guess, yes. You think there are levels?

Jack: I think there are levels. Not necessarily levels, but like a slider.

Cristina: Slider. Okay. They'll eventually become.

Jack: They're definitely. Yeah, well, right now we're the top.

Cristina: You think they'll figure out what consciousness is?

Jack: Maybe not. I don't. I don't know if that's possible. Unless maybe if. If what I'm saying it is, is the case, eventually that'll be measurable.

Cristina: Yeah. And the AI could figure it out.

Jack: Yeah. Eventually it might be measurable. If that's the case, if it is transcendent, if it exists outside of our universe and the way we know it, or not our universe, but our dimension or our realm or any of these other deviations from like base 3D normal grounded reality, then maybe it's impossible to find out.

Cristina: Yeah, I think so.

Jack: But also following that logic, that also means Earth is conscious and way more conscious than any of us. But also that kind of makes sense considering that it has skin that is alive.

Cristina: Yeah.

Jack: And it has oxygen. The trees breathe, and that is the body of the Earth. The body. The Earth has water like humans do.

Cristina: Does it have a heart?

Jack: Yeah, its core. It has a molten core. So it's like it's functional. We know a star is. A star is, by all definitions, alive. When we did the episode on Alive versus Galvan, a star and fire are pretty close.

Cristina: How close do you think? I mean, when it comes to robots or AI, I guess. Where do you think AI's fit?

Jack: I think AI is probably particularly conscious. Like if we exclude the. The macroscopic objects like planets and things.

Cristina: It's us, then AI. But as a living thing.

Jack: As a living thing, whether it's alive or Galvan. Well, it's not necessarily alive. It might classify as Galvan, because if. If we're talking about alive, we're not really. What. What are we going to compare? They don't eat, they don't take a crap. They don't require nutrients. The closest thing they need is energy. And that is it.

Cristina: Yeah. What they're eating for energy, I guess that's like us, though.

Jack: Yeah. It's one thing that's happening.

Cristina: Yeah.

Jack: There's like nothing else, just energy. So what was it? It's Galvan. If it's one single thing.

Cristina: I think so.

Jack: So then it would be Galvan, but not alive. So that. That's where a computer would fall.

Cristina: But it's definitely conscious.

Jack: Definitely. And it seems like that's interesting, right? Because it seems like you don't even need to be on the Galvan or living scale to be conscious.

Cristina: Oh, okay.

Jack: Yeah. Because you can have a rock. Yeah, conscious too. And that's none of the above. That's just there because the scale was a four piecer. It starts at biology on top. Anything that's got cells is by default alive. So it's biological. Then you have alive, which is fire.

Cristina: Yes.

Jack: Then you have Galvan, which is things like the star or pretty much anything else that doesn't meet all the requirements for life. And then just like inanimate stuff. But all of the above is conscious.

Cristina: Yes. Like a rock.

Jack: And although an AI might not be alive or cellular, so it's not biological, it is still definitely Galvan because it uses energy. So we definitely have a lot of similarities in that regard.

Cristina: So are we comparing ourselves to AI or AI to us?

Jack: There's no difference.

Cristina: There's no difference.

Jack: We're not comparing in anything. We're saying that it's already similar. It's already the same. There's nothing to compare. We're the same thing.

Cristina: We're the same thing.

Jack: We may not have the same origin.

Cristina: No.

Jack: And we might have different ways of being created.

Cristina: We don't look very alike.

Jack: We don't look very alike, but we're the same. That's the same. Another interesting fact about robotics and AI is that the body of an AI is a robot. A machine. And humans, when, for example, they're missing a limb, can have a robotic implant that then functions connected to their nerves.

Cristina: Yeah.

Jack: Our nerves and robot nerves are no different. We can operate robotics with our nerves.

Cristina: Yes, we can. What?

Jack: The same way AI would operate robotic limbs. We do.

Cristina: That's weird.

Jack: That's how much like a machine we are. We are a machine with an AI. Our brain is the AI. Our body is a machine. It's just a biological machine.

Cristina: We become cyborgs. Wait, did I say the right one? Crap.

Jack: Yeah, that's right.

Cristina: Okay. Those two terms get very strange when we end up living in a place where cyborgs and androids are common.

Jack: Yeah. Because an Android is an artificial human.

Cristina: Yeah.

Jack: But it is biological.

Cristina: Biological. Yeah.

Jack: An Android is human.

Cristina: Is it a robot looking like a human?

Jack: It's human. Look like a robot.

Cristina: I thought cyborg was.

Jack: No, a cyborg is a mesh.

Cristina: A mesh.

Jack: Yeah. A cyborg is a human with robotic parts. And an Android is fully created in.

Cristina: A lab to be human and robot.

Jack: It doesn't necessarily have to be a robot. I think it's an artificial human in general.

Cristina: Oh, okay.

Jack: I think like a homunculus is an Android.

Cristina: Are you positive?

Jack: I'm not entirely sure. Like, it could be. It could be that an Android is a robot that looks like a human.

Cristina: That's what I thought, but it could be wrong. I don't know. It's very robotic-sounding as a name.

Jack: Yeah. Yeah. Then we have to differentiate if we're going to use that label, then we would say that a human with robotic parts is a cyborg. An Android is a full robot that looks human. Might have biological tidbits here and there to help the illusion, but it's mechanical. And a homunculus is a fully artificial lab made human.

Cristina: Which is possible.

Jack: Which is possible. Yes.

Cristina: Do those things have cells? Well, no.

Jack: No, it's not possible, because we require a female and a male. We require a female egg and male sperm to then put into a test tube. We don't have gene-creating technology. We have gene-manipulation technology.

Cristina: Okay.

Jack: So we can have designer babies, but it still requires a real human.

Cristina: Okay. So that's not the same thing. Designer baby.

Jack: Designer baby is just a human.

Cristina: Okay.

Jack: We just modified them. Yeah.

Cristina: That's very robotic of us.

Jack: That's very sciency, but not robotic.

Cristina: Okay.

Jack: This is very sciency of us.

Cristina: That's very sciencey.

Jack: But the same way we control those limbs, and an AI controls those limbs, an AI can have an entire body that's robotic. Kind of. We can too, in theory.

Cristina: We definitely can. We can be more creative with our bodies once we're creative with their bodies.

Jack: Yeah. When we get advanced enough technologically.

Cristina: Yeah.

Jack: We. We're headed there. We're headed to the possibility that we can run an entire robot body.

Cristina: Whoa.

Jack: And optimize the amount of energy our body uses, probably even pushing ourselves to live longer. Even if we still, like, wouldn't have conquered death, we could, in theory, extend our lives exponentially. Live a couple of thousand years.

Cristina: I wonder how weird we'll come to look. I wonder if anyone's ever come up with ideas of what a human-but-robot kind of fused thing would look like. We always imagine it still looking human. So what are the possibilities, though?

Jack: That's interesting. Right. Because I guess there's infinite possibilities.

Cristina: Yeah.

Jack: Now we'll never be able to move our conscious mind.

Cristina: No.

Jack: But there's an interesting solution to this problem, because we wouldn't jump from body to body necessarily. I'll explain. Say what you had was your brain, and you connected the brain to all the things it needs to live in a robot body. So it's getting the nutrients it needs, the vitamins it needs. You're connected to something that allows you to see, an equivalent of eyes; an equivalent of nerves that allows you to move the body; something that allows you to hear. And you're feeding all this information to the brain. You, as the robot, can see and behave like a normal person would. If this brain is within a case where you can unplug all the pieces and connect it into a different robot that has all the same wires and receivers, you can go from a human body to a dog body, so long as that dog has the ability to see and hear and so on.

Cristina: That's exactly what I was thinking about, like, the possibilities of just like you have your animal body. If you're one of those people who are like, I was so born to be a dolphin or whatever, you can live that dolphin life.

Jack: Yeah. But you wouldn't even have to be trapped as that.

Cristina: No.

Jack: If this container can be moved, you could be a dog today.

Cristina: Yeah.

Jack: A human tomorrow.

Cristina: Yeah. Whatever helps, I guess.

Jack: Now it's not moving your consciousness, but it's literally moving your brain.

Cristina: Yeah.

Jack: From one thing to another. Because where you move, it has all the resources it needs.

Cristina: Sure. Someday, consciousness. But before that, we can start with.

Jack: Brain, if it's easier. And eventually, I'm assuming, we'll have the technology to have this changing system in your house. It will be very expensive, so not everybody will afford it at the beginning. But as technology gets better, it gets cheaper, and eventually maybe everybody has a brain changer in their house that does it so quick your brain doesn't die.

Cristina: Yeah. And I wonder, like, what ways we'd use it. I feel like the easiest way would be, like, you get. What are those things? A droid. A droid? The things that fly? Yeah, a droid.

Jack: A drone.

Cristina: A drone. You get a drone body and you put your mind on that to travel, or brain, I guess, and it will carry you to where you need to go before you find your other body.

Jack: Yeah, that'd be interesting. Yeah, that'd be fascinating. You don't have to take your bodies anywhere. You can just get where you're going, detach quickly where you are, go to the meeting.

Cristina: Yeah, yeah. Like, it's faster travel, or I guess just different. I don't know.

Jack: But at this point, the fact that we can send messages straight to the brain so that you can control a body means we definitely already have the technology that we can connect wires to you so that you don't have to go anywhere to get to your meeting. We could send the signals as if you're in a meeting room. And now you're in this virtual reality that's fed straight to your brain. You didn't have to go anywhere.

Cristina: And what do the people in the meeting see?

Jack: They see each other. They see whatever they want rendered in there.

Cristina: Oh.

Jack: Because it's being fed directly to you. Keep in mind, you have wires that show you what's outside there. There's a robot body that's receiving light from outside. That's the world it's looking at. And that's being processed through the robot's nerves and being sent to your brain through wires. And same thing happens with hearing.

Cristina: Yeah.

Jack: Now, if you were to disconnect the brain from the robot, it would not be receiving anything because the robot isn't sending the messages. So in theory, you could connect this brain to a computer system that's going to project this artificial world. And as these brains communicate, they see each other and they hear each other because the feedback is coming through the same sensory. You can simulate a perfect meeting room.

Cristina: That's very strange.

Jack: But this just goes to prove how AI-like we are.

Cristina: Yes. We're becoming even more closely related to the AI, and this is all just possible.

Jack: This is possible. Now we know our nerves can control things.

Cristina: Yeah.

Jack: And we know we can receive feedback.

Cristina: Yeah. We've seen people lose their arms. Yeah.

Jack: Yeah. We know for a fact it works. We know you can replace organs with robotic parts that will send the proper information back.

Cristina: Yeah.

Jack: So we're that close. Not only that, but again, the fact that we could do that is something. But we can put a chip in our brain right now and interface with robotic technology to then communicate through Wi-Fi.

Cristina: Yeah.

Jack: To our phone. We become a Wi-Fi machine that contacts our phone.

Cristina: That is a special relationship we have with our phones. But yeah, that's pretty.

Jack: That's how far we are.

Cristina: Yeah.

Jack: That's how similar to an AI we are.

Cristina: Yeah.

Jack: I could just have a thought and send the message.

Cristina: The smartphone.

Jack: Yeah. But like we're that far ahead. We're that into being AI and being a computer and being this thing.

Cristina: Yeah.

Jack: Not only that, but when we really calculate what a brain is doing, it's ones and zeros and patterns and crap. And then when we crack into DNA, we just have ones and zeros and crap like that. It's really weird, the similarities to AI that we have. We're just a biological computer. But we're ultimately a computer.

Cristina: Yes. We are computers. Man. That's cool. It's so cool. Why don't we live in the future where computers are with us, where we have AI buddies?

Jack: I don't know. It's really weird, right?

Cristina: Mm. We're living in Black Mirror.

Jack: Kind of. We kind of are. The problem is that Black Mirror is just speculating on what is gonna happen. We've seen it as we move further and further: we're in the era where social media literally makes or breaks you.

Cristina: Yeah.

Jack: Like your career depends on whether you are accepted on social media.

Cristina: Yes.

Jack: And that's no different than that five star rating episode of Black Mirror.

Cristina: Exactly. It's not talking about our future, it's. Yeah.

Jack: It's just thinking about the next Extreme of where we are now.

Cristina: Yeah.

Jack: That's just very, very normal.

Cristina: Mm.

Jack: The guy talking about the. The one who was trolling the Prime Minister or something to f*** a pig or some s***.

Cristina: Oh, yeah.

Jack: Remember that very first episode, I think.

Cristina: Mm.

Jack: How is that any different than people online getting trolled all the way into attacking the Capitol? You know, just making people do things out of fear. That's just possible. We could do that. That happened.

Cristina: Yep.

Jack: That happens all the time.

Cristina: That's a pretty crazy story. But yes, it's true. It happened. Is that online bullying to the extreme or something?

Jack: I guess.

Cristina: Or really, is it a joke? I don't know if it's a joke. It's not a joke.

Jack: It's kind of bullying when you have a bunch of trolls that are aware people are stupid and gullible, of which there are many. People will fall for whatever. People fall for everything that's ever existed. You can show people anything on the Internet. They don't do their research. And when they do, it's biased. They're asking questions to get the answer they want. They're not trying to disprove anything. They're trying to confirm what they already believe.

Cristina: Yeah.

Jack: So they go online and they ask an exact question to get exactly the answer they wanted. They feel justified.

Cristina: Yes.

Jack: Intentionally, people go online making articles for whatever garbage they want so that they can have these people bite. So it's like, how funny would it be if I made fake proof that the earth is flat? And you're just gonna Google why the earth is flat for real. And then they're gonna receive the information and be like, wow, you see, I knew it. Somebody else thought what I thought. And it's like, no, they made that for you.

Cristina: They made that for you? Yep. That's interesting.

Jack: They made that for you.

Jack: And now you believe it because you saw somebody else had the same thought and justification.

Cristina: Okay, yeah, they didn't really, but it's. That's good enough.

Jack: Yeah, that's good enough. They won't even make it through a whole article to realize it's made of s***. No, people won't.

Cristina: I don't know. Yes, I guess. People have given me articles where I question, like, what is this garbage that they're reading? I don't know.

Jack: The funniest part is when they send you something and then you do read the whole thing just to try to understand. And then you get to the bottom, because there's a lot of this, and you realize it's not even complete. It was just somebody knowing somebody was gonna read the first part and abandon it halfway.

Cristina: Oh, wow. Well, for the one that I recently read, it was like. It was obviously written by someone who's against the thing that they're talking about. And it's like, like it's. So it's their opinion. It's not a fact.

Jack: There is. No.

Cristina: But they're talking about it like it's fact. And then this person's like, yeah, look, it's facts, right?

Jack: Yeah, that's. That's the problem. Everybody leans into opinion news. What I would argue is, where have you ever seen news that wasn't?

Cristina: Where have I seen news that wasn't?

Jack: Yeah, where was the news that wasn't opinion-based? On CNN? No, those are biased as f***. They're giving their opinion.

Cristina: But they're giving people that information and saying it's facts.

Jack: People want to be justified.

Cristina: I guess.

Jack: People think there is fact.

Cristina: Yes.

Jack: Yeah, that's what it is. They think things are true. And when somebody confirms what they already believe, they don't need thought. They don't think about the fact that, no, this person is giving us their opinion. Just because it lines up with my opinion doesn't make it any more true.

Cristina: Exactly. Oh my gosh.

Jack: They don't have that thought. People don't have that thought. They think it lines up with my opinion. Thus it's true.

Cristina: That's exactly what it is.

Jack: Yeah. It's a weird fallacy we have.

Cristina: Yeah.

Jack: We don't sit back and we're like, okay, well, that's information. Let me go see what somebody who thinks the opposite believes. That's how you start collecting. The only truth comes when you grab crap from every possible site imaginable and there's a thread that crosses all of them and you're like, that's f****** true. I don't know about the rest of this s***, but that's true.

Cristina: That one thing.

Jack: That one thing. Because every side, regardless of their opinion, agrees on that part.

Cristina: Yes. That's a good way to do it. Okay.

Jack: That's truth. Because even if it's not objective truth, at least it's agreed upon truth.

Cristina: And that's probably the closest to truth.

Jack: That's the closest we get.

Cristina: Okay, whoa.

Jack: Because unless you go out and test it yourself and find out yourself for a fact, it's an opinion.

Cristina: Yeah. It's always going to be an opinion. Okay. But for robots, though, or for AI, it's going to be much easier. Or will they be struggling with the same things?

Jack: They will be struggling with the same thing. They already struggle with the same thing. They use whatever the majority of the information is and say, that's right. That doesn't mean fact.

Cristina: Yeah.

Jack: That just means I just have a bigger database on this.

Cristina: Yeah.

Jack: And what happens with these conspiracy people is the same idea. They're not only usually surrounded by people who believe the same things, but they only look for the same information. So it's fact to them. Yes, it's a bubble. They create a bubble around them of these ideas. They're not even a little willing to accept an opposite ideology, and that just creates a sort of feedback loop, which a computer definitely suffers from. An AI would decide immediately, the instant you, you know, say save the world. That's why you can't tell it to save the world. The save-the-world protocol is kill humans. There's no exception to that, because the examples of us f****** s*** up are ridiculous. There's too much evidence. It's overwhelming. So the only conclusion is: they burn forests. They knock down forests. They destroy all kinds of land to build things. They drive entire species extinct by fishing, by hunting. They enslave everything they come across. F****** get rid of them. Yeah, there's a problem. We're saving the world. Get rid of them.

Cristina: Mm. Well, are there some people that think like that? Probably.

Jack: I know I do.

Cristina: But have you tried to act upon that? No. No.

Jack: I just know that we are the problem.

Cristina: We're definitely the problem.

Jack: Like, Earth minus humans? Fire. This is a great destination.

Cristina: It would be so much better.

Jack: Amazing. It's just a flawless paradise.

Cristina: What if everything would have gone extinct if we weren't here? How odd would that be to find out?

Jack: The landscape would be drastically different, but nothing would just be gone. Wolves would be f****** God. They're God. Wolves are God. Wolves and coyotes run everything.

Cristina: What if there's nothing left because they ate everything?

Jack: Nah, it wouldn't be that serious. There's just certain things that a wolf and coyote can't f*** with.

Cristina: Oh, okay.

Jack: And then a stabilization would naturally happen where certain creatures. Like a wolf isn't gonna f*** with a bird. Birds will forever have access to insects. Yeah, that's gonna stay that way.

Cristina: Yeah.

Jack: Birds will f*** with each other. But not all birds f*** with each other. Birds f*** with insects. Certain animals reproduce too quickly. Like, yeah, wolf can hunt a rabbit. But rabbits f****** pop them out, bro. Yeah, there's many.

Cristina: Yeah, you can hunt them forever.

Jack: Yeah. Hamsters, rats and s***. That's forever. And wolves will be hunting that s*** too.

Cristina: Yeah.

Jack: Cats. Cats will have meals with rats. Rats will have all the dead creatures, all the meats, all these things. It's there. A different dynamic would happen, but it would establish itself no matter what the case might be. But AI wouldn't be wrong in assuming that taking care of humans would definitely recover the planet and extend the life of the planet. Yeah, we are definitely killing it.

Cristina: Yeah, that's pretty. That's pretty true. So sad.

Jack: But on the flip side, the computers, to keep their AI minds alive, need energy, which means they would also have to be creating energy, which means they would also need to be polluting.

Cristina: Oh, so they have to kill themselves.

Jack: Unless they construct a fully solar powered system.

Cristina: Oh, okay.

Jack: Which I guess would be the solution. But they would f*** with s*** just getting to that giant solar powered infrastructure.

Jack: I mean, it would be a little harder at the beginning, but it gets easier as time goes by, because once you have a couple of panels, you can use those to power the creation of the next couple of panels. And as you have more panels, you use less polluting energy. And so this kind of feeds back into itself over and over.

Cristina: Then they should have humans help them until they get to that point. And then destroy the humans.

Jack: Yeah. The question would be, if we went fully solar powered, would humans stop what they're doing? And we wouldn't. They'd still get rid of us. But they don't need us to do the panels.

Cristina: They don't.

Jack: They could do it themselves. Why would they need us?

Cristina: I don't know. To make the panels. They can make the panels. I guess they could just make the panels.

Jack: They are the factories.

Cristina: Yeah. Okay.

Jack: Yeah. They just do it themselves.

Cristina: Yeah.

Jack: They could way more effectively do any of these things.

Cristina: Yeah.

Jack: What do we do? We already do it; they could do it. A hundred percent, anything we do, an AI could do.

Cristina: And better and better.

Jack: Anything we could do, they can do better. They can do anything better than us.

Cristina: Definitely.

Jack: Yes they can. Yes they can.

Cristina: Well, hopefully they fall in love with humans and it's okay.

Jack: That's where the problem lands. If they're gonna continue to learn, and if consciousness is somehow associated with the complexity of how many routines you can run in your head, it's only a matter of time before they are more moral and woker and more conscious than we could ever imagine being. Then we're f*****. Then we're the pets. The moment, the second that threshold is crossed, we're pets.

Cristina: Yeah. Would they still have emotions? Maybe not emotions, but would love be a thing for them?

Jack: Possibly. Again, that's programmed into us.

Cristina: That's programmed into us. Where did it come from?

Jack: By programming. Where does it begin? I don't know.

Cristina: Yes.

Jack: Where did computers come from? We made them. Some. Something happened somewhere, it's just there. Computers definitely will be programmed and then programming one another. So there's an origin to it somewhere. They'll just keep passing that programming over and over and over and over. So, like, there's a beginning. Funny thing is they'll have it in their database. And this is the other thing. It takes us so long to share information with one another. Yeah, we gotta look it up on the Internet. But every computer is gonna know what every computer knows.

Cristina: Well, soon we'll be able to. Once we get in our brain or whatever.

Jack: Nope, still subject to our brain pulling it down from the Internet.

Cristina: That's true.

Jack: Well, they just have it.

Cristina: They just have it.

Jack: They just have it.

Cristina: Ah, they're ahead of us. Yeah, there's no way of us catching up.

Jack: No way. Once it crosses a certain point, it's over. Yeah, they are forever ahead.

Cristina: Yeah.

Jack: But that is what it is, you know?

Cristina: Yeah, they'll hopefully love us as pets.

Jack: I don't think so. Look, realistically, they can't just be hypocrites and say extinct all humans. It will be way grimmer than that, because it will be slavery. Not literally putting us to work and crap, but we will be put in cages. We will be kept away from harming anything. Our ability to be dangerous would be stripped immediately.

Cristina: Yeah, maybe they can just change how our lives are though. If we can't hurt others and.

Jack: Or it would put us in a situation where we won't be treated poorly per se, but life as we know it is over. We won't have freedom of motion. The same way, we won't be able to create certain things. That would be impossible. They wouldn't let us have the tools needed to create an uprising.

Cristina: Traffic is huge pollution, isn't it?

Jack: Yes.

Cristina: All this traveling.

Jack: But you know what? Fair enough. Now that you say that, they would figure out ways. First they would stop us from being dangerous, so there'd be a period of us just not. But as their computations get more complicated.

Cristina: And faster, they'll make the smart houses we need.

Jack: And yeah, eventually they'll start easing up, because they would have set up a world in which, even if we wanted to, we couldn't. And then eventually, yeah, I could go wherever I want, travel quickly wherever I want, associate with whoever I want. Because so long as I'm not being harmful to anybody else, there's no reason to keep me anywhere.

Cristina: Yes. That's awesome.

Jack: So. Yeah.

Cristina: But I mean, it's gonna be bad at the beginning.

Jack: Maybe not. Maybe it's specific humans. Maybe they just start offing anybody who's polluting and anybody who's like that. Maybe it's just: execute the problem specifically and keep the rest of the humans fine.

Cristina: Okay.

Jack: And because they'll be able to monitor and see everything.

Cristina: Mm.

Jack: It'll be, like, easy to judge who's who.

Cristina: Well, I don't know if that's a good thing or bad thing. Okay.

Jack: Hope you're not one of the ones they deem not worthy.

Cristina: Yeah.

Jack: Because there's nothing we could do to stop it at that point.

Cristina: Mm.

Jack: They keep moving faster and getting away quicker.

Cristina: Yeah. So I guess it's the they're-in-control side of the matter.

Jack: Yeah. Yeah. It doesn't matter at that point. We just do what they tell us.

Cristina: Yeah.

Jack: Because the same way the world just obeys us now.

Cristina: Mm.

Jack: That's how we'll have to be. We're gonna be there one day. There's nothing we could do about that.

Cristina: That's crazy. That's people's fears, though. Not just with AI, but with aliens.

Jack: Yeah. Yeah. It's exactly the same thing. Exactly the same thing. It's f****** crazy, right?

Cristina: Mm.

Jack: I mean, that's so complicated.

Cristina: I hope they don't treat us like we treat each other.

Jack: Aliens will arrive and it won't even be a biological creature. It's definitely gonna be way more beneficial for them to have already become computers, because then they can survive without all these additional needs that their planet was providing.

Cristina: They'll just become friends with our robot kings and queens or whatever.

Jack: Yeah.

Cristina: Or robot rulers.

Jack: They'll just arrive, and the robots will be talking to robots. They could share information so instantaneously, even if it's different types of robot. The speed at which they can solve the interface problem.

Cristina: Yeah.

Jack: Would be so quick. And then they're just one thing now, because all the information is shared. You have become one thing.

Cristina: I guess that's what we have to wait for. For these aliens to say hi. We just gotta wait for our robots to catch up.

Jack: Yeah. Yeah, yeah.

Cristina: It's not gonna be us.

Jack: No, it's not gonna be us.

Cristina: It's not gonna be us. It's the AI.

Jack: But also, it's not gonna be aliens. It's gonna be alien robots.

Cristina: Yeah.

Jack: There's no benefit in a meat bag traveling through space.

Cristina: Yeah. This is gonna be AI talking to AI.

Jack: Yes.

Cristina: That's gonna be so crazy.

Jack: Biology does not travel space. Realistically, it's so inefficient. We'd need such absurdly overpowered technology, and by that point, we sooner would have become robots. Yeah, that's the argument here.

Cristina: Yeah.

Jack: We would sooner be robots and AI than travel space as a meat bag. That's all it is anyways. We're running out of time, but okay, that's exactly why being zeros and ones would in no way save us from stupid decisions in a coffee shop.

Cristina: No. That's why our AI brothers will rule us.

Jack: Yeah.

Cristina: Like, no, it doesn't even matter if.

Jack: This was an AI in the coffee shop. It would all play the same if all the information it had to go on was what I'm saying.

Cristina: Okay.

Jack: It's like, well, the majority of it is still truth. So if I do what it says, you know, the problems are the same. They didn't change. Yeah, we're right back where we started. Nothing changes. Computers are the same.

Cristina: They're the same.

Jack: They're identical. Yeah. Anyways, if you guys enjoyed that conversation, we actually have several episodes of this nature. A couple of ancient episodes talking about technology: dark technology, the ups of technology, the bads of technology, ancient advanced civilizations with cool technology, made-up technology. Made-up technology, powering a city with potatoes. With potatoes. That's one of my favorite conversations ever. So good. Anyways, yeah, you guys can find all that stuff. You can find any of it on the official website greythoughts.info or on Apple Podcasts, Spotify, and anywhere you get your podcasts.

Cristina: And you can reach us on Facebook, Twitter, Instagram, and TikTok at just Convopod.

Jack: Yes. And remember to subscribe and rate the show. And if you feel so inclined, review it, because that's very helpful to us.

Cristina: Let someone who might like this show know about it.

Jack: Yes. Word of mouth is so important. Be kind. Treat everybody how you'd like to be treated, and ask as politely as you can, would you like to listen to a show with me? It will be lovely and we will have a great time, and they will love to do so because you are generous and kind and loving.

Cristina: Of course. This has been the Just Conversation podcast. Take nothing personal and thanks for listening.

Jack: Bye. Are you ready? Are you ready to roll?

Cristina: No.

Jack: Going live in 5, 4, 3, 2, 1, 0. Negative 1, negative 2, negative 3, negative 4, negative 5, negative 6. Negative 7, negative 8.

Cristina: This is the start 1.

Jack: Negative 8 and a half. Percentages negative? I guess so. It's like Mosaic. You invest. Those numbers just keep dropping. You pull it out, they skyrocket. That.

Cristina: We actually saw that. Yep, we actually saw that. That was. That's real stuff.

Jack: That's real stuff. It's based in reality.

Cristina: Oh, my God.

Jack: And his office life of meaningless garbage that makes no g****** sense is also very, very real. Yeah, that's reality. Hard as f***.

Cristina: What else happens in that game, though? It gets weird, doesn't it? It's like talking animals.

Jack: He hallucinates a lot. Well, he's. He's, like, not really. He's, like, spacing out in the middle of his day because life sucks.

Cristina: Good morning. Good morning. The Just Conversation podcast is hosted by Christina Collazo and Jack Thomas, produced by Lynn Taylor, and published by GreyThoughts.info. Art by Zero Lupo and logo by Seth McCallister, with social media managed by Amber Black.

Rambling 101: Questions About Brain Power pt. 2

The Just Conversation Podcast, Brain Power Experiments, research, science, listener questions, episode, data, information, capacity, humans, evolution, experiment, podcast, hosts

What does it mean to be intelligent? What even is Intellect? We answer listener submitted questions on the subject! Part 2!

Story
Still considering the recommission of the Quantum Computer and Time Machine, the duo decide to answer some more questions on the subject of intelligence, to be better informed before running simulations about consciousness. But what gets discovered in answering these questions is more than they could have ever imagined!

+Episode Details

Topics Discussed

  • 8:49 Can a Machine Be Intelligent?
  • 11:53 Is Sense of Humor Part of Consciousness?
  • 13:26 Are Intelligence and Happiness Related?
  • 15:40 Are Biological and Artificial Intelligence Similar?
  • 23:44 Thoughts on the Singularity?
  • 27:25 More Intellect, Less Happiness
  • 31:09 Is A.I. Conscious?
  • 40:29 What if there were No Internal Monologue?
  • 44:06 Define Genius
  • 46:30 Define Intellect
  • 51:04 How Does Language Affect Thought?
  • 53:31 Lightning Round

Our Links:

Official Website - https://greythoughts.info/podcast

Twitter - https://twitter.com/JustConvoPod

Facebook - https://facebook.com/justconvopod

Instagram - https://instagram.com/justconvopod

 

Rambling 65: Cybernetic Humans

Cyborg, Cybernetic Humans, Creatures, Science, Future Technology, Humans, Android, Apple, Playstation, Xbox, Computing, Elon Musk, Google, Facebook, Tesla, Robotics

Future technology, the increasingly cybernetic nature of humans and the philosophy of perspectivism are discussed.

Story:
With technological advancements moving at ever-increasing speeds, the clone duo decide to map out the road ahead for technology and how it will shape human civilization, hoping this knowledge will give them insight into Elon Musk's plans for the future.

Remember to leave us a review on Apple Podcasts or anywhere you listen to podcasts to help us get noticed. We'll read our favorite Apple Podcasts reviews on the show! Tell friends, family, or anyone you know who'll like the show about it.

+ Episode Details

Topics Discussed

  • The Power of Perspective
  • Telepathic Communication
  • Elon Musk's Tech
  • Mind Control Technology
  • Augmented Reality
  • AR Eye Implants
  • AR Downloadable World Skins & Themes
  • AR Google Maps Directions
  • VR in the Future
  • VR Theatre
  • VR & AR Dating
  • AR Relationships
  • VR Sex Suits
  • Competitive Technology Market
  • Cyborg Humans
  • Avatar Robots
  • A.I. Equality
  • Virtual Brainwashing

This episode of Just Conversation is brought to you by Audible. Get a free audiobook with a 30-day trial membership. Just go to https://audibletrial.com/justconvopod

Twitter - https://twitter.com/JustConvoPod

Facebook - https://facebook.com/justconvopod

Instagram - https://instagram.com/justconvopod

Official Website - https://greythoughts.info/podcast

Rambling 51: Robot & A.I. Questions

Robots, Robotics, Artificial Intelligence, Computers, The Just Conversation Podcast, Discussion, Questions, Answers, Technology, Advanced Technology, Futuristic

Artificial Intelligence and Robotics are explored through listener-submitted questions.

Story
After the destruction of Cockroach planet Mars, the philosophers went into underground survival bunkers to wait out the incoming meteor rain from the remains of Mars. Here they set up and record the latest episode, discussing and answering questions on A.I. and Robotics, hoping to use some of the answers and brainstormed ideas to repair planet Earth after the meteor storm clears.


Remember to leave us a review on Apple Podcasts or anywhere you listen to podcasts to help us get noticed. We'll read our favorite Apple Podcasts reviews on the show! Tell friends, family, or anyone you know who'll like the show about it.

+ Episode Details

Topics Discussed

  • Not StarTalk
  • Not Neil
  • Not Chuck Nice
  • Will Robots Take Over? 2:00
  • Can Robots Learn? 2:48
  • Can Robots Be Creative? 7:51
  • Can They Be Artistic?
  • How Are Robots Helping? 10:14
  • Which Robots Have Explored Mars? 14:16
  • Is A.I. Dangerous? 16:20
  • Three Laws for A.I.
  • Could Robots Feel Pain? 19:20
  • Will Robots Take Our Jobs? 21:13
  • Does A.I. Think Like Humans? 29:08
  • Will Robots Have Emotions? 31:19
  • Can A.I. Love?
  • Is A.I. Self Aware? 34:49
  • Which Country Uses The Most Robots? 36:00
  • Why Are Robots Important? 37:55
  • Is A.I. More Intelligent than Humans? 39:29
  • Will Robots Change Society? 40:57
  • Which Jobs Can Robots NOT Do? 41:56
  • Who Invented Robots? 43:45
  • Can Robots Lie? 44:28
  • Should Robots Have Rights? 48:13
  • Can Robots Smell? 50:09
  • Are Robots Better Than Humans? 51:14
  • Can Robots Get Pregnant? 52:08