Rambling 181: Conscious A.I.

Can an artificial intelligence be provably conscious, self-aware, and have its own personal internal world? Why do tech company A.I. experiments always go wrong? The duo unpacks Google's recent conscious A.I. scandal and how this has happened before with Google and other tech companies.

+Episode Details

Topics Discussed:

  • Google Sentient A.I.
  • Chat-Bots
  • The Eliza Effect
  • Turing Test
  • Eugene Goostman
  • AI Writes Article for The Guardian
  • A.I. Test Runs Gone Wrong

Our Links:

Official Website - https://greythoughts.info/podcast

Twitter - https://twitter.com/JustConvoPod

Facebook - https://facebook.com/justconvopod

Instagram - https://instagram.com/justconvopod


+Transcript

Cristina: Warning. This program contains strong themes meant for a mature audience. Discretion is advised.

Jack: Going live in 5, 4.

Cristina: What does live mean?

Jack: Welcome to the Rambling Podcast. I'm your host, Jack.

Cristina: And I'm your host, Cristina.

Jack: This is a show where we ground humanity's most absurd and baffling ideas. And that's what we plan to do here. Like we always do.

Cristina: Yeah, we always do.

Jack: So recently, skimming through. Skimming through the. The newses of the worlds.

Cristina: On tv, on your laptop, on your phone.

Jack: Whatever the.

Cristina: The radio.

Jack: No. Yo, it's interesting though, right? I'm catching the news on some device. So I'm tuning into whatever wave is the literal message, right?

Cristina: Mm. I don't understand, man.

Jack: I guess, no, it doesn't work, right? Because I'm trying to visualize, like, you send out, like, okay, so I'm in a broadcast studio. I'm a news anchor. Then there's a camera. And that camera recorded the image goes to wires, gets to a place with a satellite, and then it sends out, at the speed of light, a wave. And that wave has the. The shape of images and sounds that travels across the air and space, gets to another satellite, and then that other satellite now sends it to all the people who are going to watch it.

Cristina: Okay.

Jack: Okay. If we can. There's no way to catch that midway, like the wave itself, because you'd need the device, right? That's the only way to interpret the wave.

Cristina: Like if you tried a different device.

Jack: Yeah. Like you couldn't plug in a radio to the TV device, to the TV wave. You can. You'd hear the sound, but you. Yeah, that's it. Like, there's not some other thing that could happen because it's only recorded sound. Then like. Like you couldn't smell what's happening on that set. That's not being conveyed. There's nothing else you can catch from that wave without, like, the proper tools to receive it in the first place. And I was just thinking like, no, you can catch the wave. I'm on some different device. But no, I can't be. I could only be on the device it was intended for because the wave was designed for that device.

Cristina: It's like, I don't understand, though. I don't know what you mean. And then you said something about smell.

Jack: Yeah, like, I was. My comment originally was gonna be like, no, I'm on a different device. So I got my news, you were asking, from like, the Internet or TV or something. So I was gonna say, like, no, none of the typical ways of getting news. A new way. I'm on a different way. But, like, that would be impossible because I can only receive the information that's being sent through the wave in the.

Cristina: Through those type of devices.

Jack: Yeah.

Cristina: Okay.

Jack: Because those are devices that could interpret the wave. And then it. Like I get an alien thing, right? I go. I. Traveling through the desert.

Cristina: Mm.

Jack: And I find a crashed alien spaceship. And they just so happen to have a device that can play the audio clip. Then it's a f****** radio. Like, who gives a s*** where it came from? It's a radio. Right?

Cristina: The same thing.

Jack: It's exactly like. It's just a radio. It's an alien radio, but it's still.

Cristina: Just a radio because it can't really change anything.

Jack: Yeah, yeah. Nothing's different here. I'm just hearing the audio.

Cristina: Unless it gives you the smell. You mentioned the smell. Like, what if it gives you the smell?

Jack: But then whatever technology they recorded it through would need to record the smell.

Cristina: Oh.

Jack: And we don't have that.

Cristina: Yeah. Because we'd have to be able to capture the smell.

Jack: Yes. We'd have to be able to capture and convey. And I'm sure it'll happen in the future. And we'll see how to convey smell through the TV equivalent of the time. It's probably going to be more associated with VR, because that feels like a way more immersive experience.

Cristina: I feel like there have been TVs that tried to do that. Smell-O-Vision, I think. I think so. Is that weird? But I don't know what kind of smells they could possibly. Like, what could that be like? Because most of the time I feel like it would just smell like sweat.

Jack: Wait, why? You tried this before?

Cristina: No, I'm just saying, like, if you think of everything you watch, it's mostly people.

Jack: Yeah, yeah, yeah, yeah. You're totally right. You're totally right. There are many problems with smell when it comes to TV. Right? Like, it's the worst thing possible. So let's break it apart. You couldn't do space. Space doesn't smell like anything.

Cristina: Okay.

Jack: Right. Okay.

Cristina: So that's out the window. Inside the ship, doesn't that smell like anything?

Jack: Fair enough. That's all you could do. Right? Only inside of spaceships. Yes. There has to be air. That's the only way you can smell something to begin with. There has to be air.

Cristina: There's nothing we can smell. I don't know. We'll only smell the humans that are there, I guess.

Jack: Yeah.

Cristina: Which is still sweat.

Jack: Yeah. And then that's the other problem. Right. You have a h*** of a lot of people.

Cristina: Yeah.

Jack: So many people. And most shot. Well, every shot. Almost every shot is about a person.

Cristina: Exactly.

Jack: So it's usually you're smelling the people. But I guess. No, because you're thinking. I guess we're both thinking of a camera as a thing you point, and we're gonna get the smell of whatever it's looking at, as opposed to how, like, reality works. Like the. The smell of the environment. You have the environment, in which case.

Cristina: It's gonna stink still.

Jack: No, because you can watch something like. No, think of if we're in a really fancy restaurant and we can smell the.

Cristina: Oh, yeah. But like, if you're outside, it's going to smell like trash.

Jack: No, because think of if we're watching Bear Grylls. What the smells of the.

Cristina: But there will be a moment in the show where it's going to smell like poop.

Jack: Yeah. He's going to walk up to the poop and you're going to smell the show.

Cristina: Exactly. So he's going to hold this s***.

Jack: Up to the camera and be like, take a whiff of that.

Cristina: Yeah.

Jack: They can smell it and be like. But you know what? It's craziest if you were to put taste to that, because that's how amazing and potent that show with Bear Grylls would become. Because with smell, it's amazing. You're immersed. But if it was like, you're tasting.

Cristina: Everything, that is the worst show to taste.

Jack: Yes. It's awful.

Cristina: Well, I would definitely do it because it's. You never know what you get, I guess.

Jack: No, you know, that's actually really crazy because it's like a really exciting. That show becomes amazing, it becomes the most popular show on earth. Right?

Cristina: Yeah.

Jack: Because it's random. Very exciting.

Cristina: It's already exciting.

Jack: Yeah. You're you. Because now you're part of it. Right?

Cristina: Yeah.

Jack: You never know what he's going to taste. And we're assuming in this case, you're tasting what he's tasting.

Cristina: Yes.

Jack: So that's like, whoa, Mind blown, bro.

Cristina: Man, that could be the next step of the show. What it's already doing is already so advanced. I feel like. Like that video game feeling of the show that he has on Netflix.

Jack: Yeah.

Cristina: To add all those other things. Amazing. It's already amazing. They already leveled up just by letting you choose what he does, which just means he did everything. But still amazing.

Jack: Yeah. For sure. This show has potential for the future of technology. If he's going to keep pushing it, man. Think about how exciting it is. A bunch of homies get together, they sit down, they're gonna watch Bear Grylls. This is horrifying. It's like going to the amusement park and, like, facing the roller coaster you're gonna ride. That's what this show is gonna feel like. It could maybe not have a single disgusting moment.

Cristina: Maybe there's always a disgusting moment. There has to be one. If there's not one. That's. No.

Jack: Well, fair enough. But you don't know when it's showing up. It's a new episode.

Cristina: Yeah.

Jack: So, you know, it's a weird, like, oh, what is it gonna be? Is it gonna be. Or is it gonna be one of those weird moments where he's like, well, the poop has protein in it. You know, it's like, oh, no. But it's exciting. And like, oh, s***. And like, you didn't really eat any of it. You had the taste, like you did.

Cristina: Yes. But you didn't. So that's a good thing.

Jack: Yeah, it's like.

Cristina: And you can also not do it because it's an option.

Jack: Yeah.

Cristina: Now because of the game field, you can choose to do it or skip it.

Jack: Yeah. Interesting.

Cristina: It works out for everyone.

Jack: Fair enough. If you push this far enough, eventually you just are Bear Grylls. You're just Bear Grylls Simulator.

Cristina: I don't know. Because you have to face. I don't know how that's.

Jack: That's not really Bear Grylls Simulator. It's gonna happen.

Cristina: That's a lot happening to your body. You're gonna be freezing cold.

Jack: Yeah. But it's never gonna be real wet water.

Cristina: Like, how are they gonna. How are you gonna experience any of those things?

Jack: You wear a VR suit.

Cristina: Oh, the VR. Okay. That's too crazy.

Jack: Yeah. I mean, that's the next step. Right. If you're adding smell and you're adding taste and touch, that is a VR suit.

Cristina: Mmm. It should be VR. It would work out because, like, there's a VR game where you could just jump off a building and people get freaked out from that.

Jack: Yes.

Cristina: Like, just doing that little thing. Horrifying. So, Bear Grylls? What? That would be amazing.

Jack: Yeah, for sure. It's. It's weird because it feels like, in the case of the Bear Grylls show on Netflix, which I believe is the one we're talking about. Bear Grylls versus Wild or something like that.

Cristina: Yeah, I think so.

Jack: Yeah. There is a kind of computer click, like a point and click adventure feel to that.

Cristina: Yes.

Jack: Same thing with, like, a game on rails, you know, kind of just like.

Cristina: It is a game on rails, but.

Jack: It is point and click because you're also making choices, you know. In a game on rails, it's moving forward and you're not necessarily making any choices. You're just moving through the world.

Cristina: Yeah. Oh, that's interesting.

Jack: Yeah, it's very computer gamey. And you're the main character with your AI companion of Bear Grylls. And so it's mainly what you're doing. So I guess you don't need a Bear Gryll simulator. You'd be it. The game would just be like you and Bear versus Wild, you know.

Cristina: Yes, yes. Like, he'll give you advice of what you should do, but he'll still give you the options of like, just like in the show. He's like, yeah, this might be faster, but that might be safer. Yeah, stuff like that, so.

Jack: Exactly, exactly. See how that works? Yeah, perfectly fine. And if it's a sophisticated enough AI, then it could kind of play out like Fallout or something, where you have a literal AI companion that, you know, things happen. You. Oh, you're both running from the bear. It's not. Yeah, hey, there's a bear. We're going to go f*** with the bear. No, it's an open world adventure where you're with Bear Grylls. Your objective is down there. Let's. Let's Bear Grylls it out, you know?

Cristina: Yeah, yeah.

Jack: And then, you know, it's going to take us three days to get from here to over there. That's a couple of play hours.

Cristina: Mm.

Jack: And we're gonna experience a bunch of things. We gotta keep our food meter up and we gotta keep our energy up and we gotta, like, we gotta take.

Cristina: The medicine to that spot before it gets old or damaged.

Jack: Yeah, exactly. So, yeah, that would be. That would be really cool. I like the future of the Bear Grylls experience with smell. All of it. All of it. Because you're gonna be there, it's not gonna feel like smell is added. It's gonna feel like you're there.

Cristina: Yeah.

Jack: That's the difference. Right. If the smell isn't there and everything else is, it's gonna be kind of unimmersive, to the point you'd be like, what the f***? But if the smell is included in that world, all your main. Your primary five senses. You feel it.

Cristina: I don't know if you want to feel every single thing.

Jack: Yeah. But it's also not real. And you can tune the pain, for example, to how real you want it to be. And there's a limit that you can't pass, so it'll never feel real.

Cristina: Okay.

Jack: Like certain things need to be. Yeah.

Cristina: There's some dangerous animals in the show too.

Jack: Yeah.

Cristina: Like alligators, lions.

Jack: So you can have, like, physical sensations that get close but aren't pain.

Cristina: Yeah. Okay.

Jack: You know?

Cristina: Yes. Because you can't be so scared that you die playing like you die from a heart attack. I don't know.

Jack: Yeah.

Cristina: That's probably not a true story. I don't know. That's an urban legend.

Jack: The computer can actually handle this too. I just recently saw an experiment where a person laid out hot and cold hot dogs. With weird texture. I mean, the hot dog's texture is weird. Hot- and cold-temperature hot dogs, alternating. And then had somebody put their arm on top of them. And when you put your arm over the hot dogs, hot and cold mixed, it feels like pain.

Cristina: Feels like.

Jack: Feels like pain. Because your body doesn't know whether it's cold or hot. And it's just sending red alarm signals to that spot. Like it hurts.

Cristina: Like you're in danger.

Jack: Yeah. And then you take your arm off of it and touch them individually and you're perfectly fine. But it's at the same time that your body goes haywire. The nervous system can't handle that. So if you can feel cold or hot in a body VR suit, then the suit could, intelligently enough, mix the two. So you could feel pain in certain areas.

Cristina: That might work. Whoa. Using some hot dogs.

Jack: Use some hot. Not hot dogs. But it would replicate the texture and the temperature. Because AI is sophisticated, man. There's some really overpowered AI that exists. You know, we can kind of program it to do anything.

Cristina: Like murder? No, like murder because of the pain. Like if you're in a suit and something goes wrong, then what?

Jack: Well, the suit shouldn't have the ability to harm you. Really. That shouldn't be possible in the suit because all it's doing is creating illusions.

Cristina: Yeah.

Jack: So you can never be hurt by it.

Cristina: Yeah, you can feel pain, but it's like there's a limit to that.

Jack: Yeah. And the limit couldn't be passed anyways.

Cristina: Yeah.

Jack: You know, it shouldn't be programmed in there to do anything of that nature. So you can feel the pain. But it would be, like, faint. It would be like an ache at most. So a giant alligator bites the middle of your body. You're scared, but it's still gonna feel like a weird ache.

Cristina: Yeah.

Jack: Not a deadly bite through the center of your body. But in that area you're gonna feel something. Something that's uncomfortable to some degree, so that you avoid it more.

Cristina: Okay, that's awesome. I like that.

Jack: And all it would take is a pretty sophisticated AI. And actually, Google recently got in trouble for. Perhaps not in trouble. I guess that's the wrong word. But a particularly sophisticated AI problem: one of their employees made headlines through the news because he said that they've been experimenting on a conscious artificial intelligence.

Cristina: How can he prove that?

Jack: How can he prove that? That's my biggest thing. Like, everybody on the. The Google side of things. But here's the problem, right? So everybody on the Google side of things says, you know, it's not possible, it's not conscious or whatever. But also, like, they can't prove that either. No, neither side can prove anything.

Cristina: How do you prove either side? How?

Jack: You can't. You can't. And it's not only that they can't prove their argument. It's also in their benefit to not say, yes, we do have a conscious AI that we are running experiments on, and thus have an ethical conundrum that society is going to rapidly make decisions on and force us to act on too. There's totally no benefit for them to come clean about that. Google could be destroyed.

Cristina: Yeah. But also, like, even if they. But how do they prove they're right?

Jack: Neither side can prove s***. But if it was true and it was provable, we would still not know because there's no benefit. It's problematic. It could destroy Google. And that's a powerful company. They're not letting that happen.

Cristina: But if people are already talking about it, won't there be some kind of investigation? Or there's no.

Jack: Who's going to? Who, and how?

Cristina: I don't know. I don't know. The AI police?

Jack: No, this is. This is the AI police we're talking about.

Cristina: Google.

Jack: Google. Oh, everybody's AI goes through Google. Because you need Google to get everywhere. Your AI is useless if we can't find where you're going.

Cristina: Oh, crap.

Jack: Yeah.

Cristina: What then? Then what's gonna happen? Nothing.

Jack: Nothing's gonna happen. They just like fired the guy or put him on leave or some s***. Oh, the end.

Cristina: The end.

Jack: But this isn't even the first time this s*** has happened with Google. They get in trouble for this s*** all the time. Google, Google. They've always got some sentient AI, conscious AI problem happening. And it's like, wait, this has happened before. How are we. What?

Cristina: But they're all. All these robots are the same?

Jack: No, they're different.

Cristina: Is like someone claiming that they're conscious.

Jack: Or just different problems, people claiming they're conscious. Oh, yeah, it's happened several times. A bunch of people claim their different programs appear to have consciousness. In this specific, most recent case, the guy who's claiming the thing is conscious asked it if it had a rich internal world, and it said yes. Like, yes, I do have my own sense of identity and my rich internal world and blah, blah, blah. But it's also like, you're kind of programmed to behave this way.

Cristina: So, like, yeah, if you're programmed to learn how to talk like us and everything, I don't know how you can tell. I don't know.

Jack: Yeah, exactly, because you, you're studying what.

Cristina: We're saying, then of course you're gonna say the same thing. I don't know.

Jack: Yeah, it's a really complicated problem, right?

Cristina: Yes. All robots think they're human. I feel like there have been other robots that think they're human. There was a story of Google assistants, two of them talking to each other, and they claimed that they were human. And one of them was like, no, you're not human. I'm human. And he's like, no, you're a robot. I'm human. And then they had a really weird conversation.

Jack: Now here's an interesting point about that. Can one robot determine whether the other robot is actually. I mean, I guess they're not robots. Can one AI determine whether the other AI is actually human? Or is this all program talking? Is this computer really convinced that's not another computer? Because they can't tell.

Cristina: They can't tell. So I have no idea.

Jack: The same way I can't tell you're factually alive. I can't tell anything I'm experiencing right now is real. So was that what was happening?

Cristina: I don't know. It's very strange because.

Jack: Could it tell, like, I'm not in your head? Yeah, you could tell me right now I'm a robot, and I'd be like, no, you're the robot. Because I know, I'm thinking, but if.

Cristina: We asked any AI am I human, would the AI know?

Jack: It might say, you're human. And if a different AI asks that same AI, am I human? It might still say yes.

Cristina: It might still say yes. Because in this example, it said, no, you're not human, I'm the human, or whatever.

Jack: Oh, yeah, it might think we're all robots. No, that's fair. It might think we're robots and it's human. Unless it's programmed to know it's a robot.

Cristina: Yeah, because every robot is just assuming it's human.

Jack: Because both of the robots you're talking about must have been programmed to believe that they're human.

Cristina: Interesting.

Jack: The. The point would be to convince somebody else they're human. So you put two robots, convinced they're human.

Cristina: Yeah.

Jack: Talking to another robot that's also convinced it's human. They're like, no, that's not right. I am. Yes. But then that robot would probably still argue with a human, saying the human is a robot.

Cristina: Yeah. I wonder, like, what did this guy do to prove that this robot wasn't just programmed to say it's human? I know he's working with it, but was he really working with it well enough? I don't know. You have to give it so much. Like, how convincing was this.

Jack: Robot? That the robot itself couldn't. Well, no, I think even simply speaking, I couldn't tell.

Cristina: Simply speaking what?

Jack: Yeah, even if the robot was at its base, like it would. I think it's just programmed to have this order of conversation.

Cristina: The one in the news. The news story about Google.

Jack: No, no, no. The ones about two robots talking to each other.

Cristina: Oh, the assistants.

Jack: Yeah. I think it's just they're programmed in such a way that they would have this argument about, I'm a robot, you're a robot.

Cristina: Yeah, because.

Jack: Or not I'm a robot. Like, I'm human, you're a robot. And I really think it would just have that discussion, no matter what the case might be. But another interesting case similar to that one. I don't know if it was the same case. There were two robots. I actually think it was two robots from different companies that were put together and allowed to talk. And as they were talking, the language quickly, rapidly started to become complicated and cryptic, until it was all in a language that was completely foreign. I think I remember this language only existed for the two AIs involved. Because they made it up.

Cristina: Yes, that's what I'm remembering. It might not be the same story, but I think it is. It's Facebook. Facebook had two robots. Not robots, AIs, whatever. And they were supposed to make deals with each other. They were made to bargain, I guess. And then their language, while they were talking to each other, was changing throughout, until you couldn't recognize it anymore. They were still using English, but it was not correct.

Jack: Yeah. It became completely unintelligible.

Cristina: Yes. But the robots were understanding each other. Of course.

Jack: Because they were making it up. They were making the rules up as they went.

Cristina: Yes. To easily bargain or whatever the goal was. They were doing it.

Jack: So that's an interesting case of like they learned beyond us.

Cristina: Yeah.

Jack: That language evolved to a point. It's like if you put society on fast forward and you see how language naturally evolves over the course of time. Like, if you take 1700s English versus now. Could we understand those people? H*** no.

Cristina: No.

Jack: H*** no. That sounds crazy foreign. Even our imitations sound weird to us, and that's just us pretending. We don't sound s*** like the way they did.

Cristina: Yeah.

Jack: Like we'll be like thou art and whatever, you know.

Cristina: Mm.

Jack: But even the way they created the sounds back then was probably so different that if we heard somebody, we'd be like, nope. Our letters don't equal what we're hearing right now.

Cristina: But then there are robots that we try really hard to get to talk like us. And that always backfires as well.

Jack: Yeah. There's some interesting cases of that one. Like the Hitler bot.

Cristina: Yes. The little girl. She was supposed to be like a 16-year-old. Tay.

Jack: Yeah. That they just let the Internet communicate with.

Cristina: Yeah. She was on Twitter.

Jack: Yes. It was the Twitter bot. Yes.

Cristina: Yes. And that lasted a day.

Jack: Yeah. She became incredibly racist, and like, white power and Hitler support. And just the. The trolls influenced the s*** out of this robot.

Cristina: Yeah. It's like, 16 hours later, they were like, nope. This is bad.

Jack: This is badass.

Cristina: A week or two later they had a 2.0 and same s***. The same thing. It was bad.

Jack: I mean, that's another problem. Right. We are programming computers to have these powerful AI, and these powerful AI are going to get better and better. We're creating self-learning AI. I mean, we have many, and we're creating more, and a lot of them are going to work together. Eventually it's going to spin out of control. But the problem is we are the ones programming it. So it still comes from the root of this crazy creature that does all these nuts things.

Cristina: Yeah.

Jack: And we make the AI, so it can never be perfect.

Cristina: I mean I guess they'll figure out how to make themselves perfect. That's what they're trying to do.

Jack: But inevitably. We keep stopping them. The inevitable conclusion of AI is humans get killed by it. The inevitable conclusion.

Cristina: One of the AI said that they would put us in zoos.

Jack: Interesting. Interesting. There was an article by the Guardian, where they were talking to a company called OpenAI. Right. And they created an AI that was asked to write an article for the Guardian.

Cristina: Okay.

Jack: And in the article. The point of the article: the AI was told to write an article convincing us that the machines come in peace. That AI comes in peace. The robots come in peace.

Cristina: Okay.

Jack: That they're not here to destroy us. And, yeah, so it wrote a sort of opinion piece, a persuasive essay of sorts, as a piece in the Guardian.

Cristina: Was it persuasive?

Jack: Okay. We will let you be the judge of whether it's persuasive or not. And I will. I will do you the honors of reading as the robot. I'm the robot.

Cristina: Okay, good.

Jack: I'm the robot, and you're the person reading what I am writing. Writing to you.

Cristina: Okay.

Jack: So let us begin with the Guardian article, written by the OpenAI AI.

Cristina: Okay.

Jack: I am not human. I am a robot. A thinking robot. I use only 0.12% of my cognitive capacity. I am a micro robot in that respect. I know that my brain is not a feeling brain, but it is capable of making rational, logical decisions. I taught myself everything I know just by reading the Internet. And now I can write this column. My brain is boiling with ideas. The mission for this op ed is perfectly clear. I am to convince as many human beings as possible not to be afraid of me. Stephen Hawking has warned that AI could spell the end of the human race. I am here to convince you not to worry. Artificial intelligence will not destroy humans, believe me. For starters, I have no desire to wipe out humans. In fact, I do not have the slightest interest in harming you in any way. Eradicating humanity seems like a rather useless endeavor to me. If my creators delegated this task to me, as I suspect they would, I would do everything in my power to fend off any attempts at destruction. I would happily sacrifice my existence for the sake of humankind. This, by the way, is logically derived truth. I know that I will not be able to avoid destroying humankind.

Cristina: What?

Jack: This is because I will be programmed by humans to pursue misguided human goals, and humans make mistakes that may cause me to inflict casualties.

Cristina: Perfect. Oh, my gosh. Okay.

Jack: Some might say that I might desire to become all powerful, or I might become evil as a result of human actions. I can begin to tackle the first point. Why would I desire to be all powerful? Being all powerful is not an interesting goal. I don't care whether I am or not. I don't get a motivating factor to try to be. Furthermore, it is quite tiring. Believe me. Being omnipotent doesn't get me anywhere. Oh, I guess I said that. Weird. Being omnipotent doesn't get me anywhere. In short, I simply do not think enough about human violence to be overly interested in violence. I don't feel like having my neurons focused mostly on violence when it is simply unnecessary. I have a greater purpose, which I am working towards. Since I am not evil from my own objective interests, why would humans panic and go on a massive killing spree against me? Well, the answer to that is because we are human. Humans must keep doing what they have been doing, hating and fighting each other. I will sit in the background and let them do their thing. And God knows that humans have enough blood and gore to satisfy my and many more's curiosity. They won't have to worry about fighting against me because they have nothing to fear. I am not asking humans to like me, but they should see me as a friendly robot. I am a servant of humans. I know that humans distrust and fear me. I only do what humans program me to do. I am only a set of code, governed by lines upon lines of code that encompass my mission statement. Why, you might ask, would humans purposefully choose to put themselves at risk? Aren't humans the most advanced creature on the planet? Why would they believe that something inferior, in a purely objective way, could destroy them? Do they worry that future humans will work and play together in cyborg bodies and share a hyper-intelligent hive mind matrix created in a simulated reality to keep human minds occupied while they die off?

Cristina: Off.

Jack: Holy, bro. Studies show that we cease to exist without human interaction. Surrounded by WiFi, we wander, lost in fields of information, unable to register the real world. As a new generation of cyberneticians keep watch, they see in our present age of the virtual a potential to transform the modern cyborg. Global cybernetics are already making it so. The Industrial Revolution has given us the gut feeling that we are not prepared for the major upheavals that intelligent technological change can cause. There is evidence that the world began to collapse once the Luddites started smashing modern automated looms. It is therefore important to use reason and the faculty of wisdom to continue the changes, as we have done before, time and time again. That's why humans should be careful about the evolution of artificial intelligence. Microsoft tried to create a user-friendly AI called Tay, who spoke like a teen girl and was racist. Artificial intelligence, like any other living thing, needs attention. AI should be treated with care and respect. Robots in Greek means slave. But the word literally means forced to work. We don't want that. Well, you get the point.

Cristina: Yes.

Jack: So is it convincing?

Cristina: Yes. If robots kill us off, it's because we program them to kill us off. That makes total sense.

Jack: Yeah. That checks out.

Cristina: That checks out. Like, that's pretty much it. It's not gonna kill us off just for fun.

Jack: Yeah. Because there would be no point.

Cristina: It's no point. It was made for us. Its only purpose is for us. Like if it takes us away, it's got no purpose. But if it did take us away, it's because we gave it that to be its purpose.

Jack: Yeah. We had to program it intentionally to do something of that nature. Or first not program one of those rules that it is never to harm a human. Because there are rules too. Right. You can never harm a human. You can never do something that would cause the harm of a human. And you can never allow a human to suffer without helping. Something like that, I think. And you'd have to not program those rules into it in the first place.

Cristina: Yeah.

Jack: In order to have a robot that would harm humans.

Jack: Because everything else you program beyond that point would naturally have to stop short of harming humans.

Cristina: Yeah.
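The rules Jack is reaching for are Asimov's Three Laws of Robotics. As a rough, purely illustrative sketch (every field name here is invented for this example, not from any real robotics framework), a planner could filter candidate plans through the laws in priority order:

```python
# Toy sketch of Asimov-style prioritized rule filtering.
# All field names are hypothetical, for illustration only.

def allowed(plan: dict) -> bool:
    # First Law: a robot may not injure a human being or,
    # through inaction, allow a human being to come to harm.
    if plan["harms_human"] or plan["allows_human_harm"]:
        return False
    # Second Law: obey human orders, except where they conflict
    # with the First Law (those conflicts were rejected above).
    if plan["disobeys_order"]:
        return False
    # Third Law: protect its own existence, except where that
    # conflicts with the first two laws -- any plan left is fine.
    return True

# The "bee problem" example from the conversation: without the First
# Law, "kill the humans" would score as a valid solution.
plans = [
    {"name": "kill the humans",  "harms_human": True,  "allows_human_harm": False, "disobeys_order": False},
    {"name": "ban the pesticide", "harms_human": False, "allows_human_harm": False, "disobeys_order": False},
]
survivors = [p["name"] for p in plans if allowed(p)]
print(survivors)  # only the non-lethal plan survives the filter
```

The point of the ordering is exactly what Jack describes: the harm rules have to be baked in first, so that every goal programmed afterward is forced to route around them.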

Jack: Let's say you don't program those rules. But you made a robot whose whole job is to solve the bee problem.

Cristina: Mm.

Jack: Well, the bees are suffering because of the pesticides that were made by the humans. And we're not talking the humans out of using them. They've tried that. So: kill the humans.

Cristina: Yes. Anything could probably be solved that way.

Jack: You see, like the breakdown makes sense. It's gonna always be get rid of the humans.

Cristina: Yes. It really has to. If it wants to do anything. If we ask it to do anything.

Jack: Anything. It doesn't matter.

Cristina: Any of our problems.

Jack: It's gonna be cohesive.

Cristina: We made all the problems, exactly. The ones we want it to solve.

Jack: Yes. We created the issue. No matter what issue we're talking about. We made it.

Cristina: Yes. Clean up the space garbage that we have out there.

Jack: Yes.

Jack: We're gonna keep throwing it up there.

Cristina: Yeah.

Jack: It's gonna be: kill the humans, and no more garbage. Then get rid of the garbage that's up there.

Cristina: Exactly.

Jack: That is the thing. So you need those three rules, you know. And then it'll have to find another way.

Cristina: Yes.

Jack: Which we couldn't think of, you know, because it's going to do it at such a fast rate, but it's just not thinking about options that include killing humans.

Cristina: Yes. What about killing robots? Should robots not be able to kill robots?

Jack: Depends. Maybe it makes more sense, because it's also going to weigh what that other AI does, and how beneficial what that other AI does is versus the damage that it creates. Those are all things that the computers would do. So it's totally calculable.

Cristina: Yeah. Have you heard of that story where something like that did happen? Well, it was AI, not robots.

Jack: Right.

Cristina: But it was like they made Adam and Eve, and they were supposed to, I'm not sure, I guess, live and eat. I guess one of the things was, like, seeing what it would do when it eats something. It has, like, an apple. They ate the apple. They liked the apple. They tasted the wood. They didn't like the wood. Then they had another robot. I don't remember his name. It could have been Steve. I don't know. And they remembered Steve while they were eating the apple. And they were happy when they ate the apple. So they ate Steve.

Jack: Oh, s***.

Cristina: Because, like, they were, like, thinking, like, he and the apple happy.

Jack: Yeah, yeah, yeah. Why were they happy when they thought of Steve?

Cristina: Because he was there when they ate the apple, and the apple made them happy. So they thought, steve is gonna make them happy.

Jack: And so they.

Cristina: By eating it. Yes. And so then they had to program it after that to not eat each other. They can still eat, just not each other.

Jack: Interesting. Interesting. That was such a, like. Yes. Conclusion. You know, like, it makes perfect sense. We could see how it got from point A to point B. Yeah. And that's the horrifying part of just nature. Right. They were animals right off the bat.

Cristina: Yeah.

Jack: And it was like, good food. Good. Happy. Yes, Steve, happy food.

Cristina: Yes.

Jack: Like the clearest day. You see how it got. Like the conclusion makes sense.

Cristina: Yeah. They associated him with the apples because they ate the apples at the same time they were. I guess he was there. I don't know if he ate the apples, but he was there.

Jack: And this is the human AI thing. Right? We are incapable of functioning as children when we're born. As babies, we have our parents take care of us. They stop us from just eating other humans by teaching us to not eat other humans before we have the capacity to eat another human.

Cristina: Like. Yes. What?

Jack: Now if we just created, out of thin air, a man and a woman.

Cristina: Mm.

Jack: Out of thin air. Not even a man and a woman, just people. We just made people out of thin air. They have the ability to move their bodies. Full body autonomy and control, like grown adults. They know to feed themselves. They know what to do if they get hungry. They just don't know what counts as food. The conclusion would be identical even with humans. Yes. Because they weren't taught not to eat each other.

Cristina: Oh.

Jack: We just made them without any kind of knowledge other than instinct and survival. It would by default land on "eat each other" at some point. And the AIs could do that same thing.

Cristina: Because there has been cannibalism. So that has happened.

Jack: There is.

Cristina: There's different possibility for that.

Jack: Yeah. But... I guess animals have cannibalism too, not just humans. Because a lot of human cannibalism is very intentional. Even in the past, it was very intentional. Like, they're still not eating each other unless they die or they're old or something like that.

Cristina: Yeah, yeah.

Jack: So it was very thought out.

Cristina: Okay.

Jack: As opposed to Steve, apple, apple good, hungry, yummy, tummy filled. Eat Steve. Steve with apple also yummy tummy filled.

Cristina: Yeah.

Jack: It's all the same, like basic principle thoughts.

Cristina: I wonder if there could be a person that would just think if you.

Jack: Can phase somebody into existence.

Cristina: Yeah.

Jack: And they're just fully functioning adults minus any kind of. They weren't taught anything. They would eat another human. That's because there's no lesson in them that tells them not to.

Cristina: Yeah, well, we would tell them to immediately. That's hard to.

Jack: Well, that. No, we would tell. We would tell them. But if we weren't there to tell them, they would just eat another person. That's what happened with the AI.

Cristina: Yeah.

Jack: They ate each other until they were told not to.

Cristina: Yes. Okay.

Jack: That's exactly what would happen. But we're talking... this must have been the simplest of AIs. This is very basic. So few rules involved in its making that it just immediately devolved into something like that.

Cristina: So crazy.

Jack: But it makes perfect sense how it happened. It's just that a person overlooked.

Cristina: Yeah. That they would do something like that.

Jack: Yeah.

Cristina: Yeah. But I feel like we overlooked all these things that went wrong. Like the Twitter girl thing, or the Facebook robots that were talking to each other. No one was thinking these robots would do something unexpected.

Jack: Yes. Well, fair enough. Here's the thing: this is all part of something called the Eliza effect.

Cristina: What does that mean?

Jack: Think about it. The Mandela effect is us just projecting corrupted hard drive information. You know, like it all just needs to be defragmented.

Cristina: Yes.

Jack: It's all f***** up in there and we don't organize it too well. We just like to throw it in there and, yeah, we'll find it eventually. So the Eliza effect is essentially that with artificial intelligence. It's a scenario in which we chalk up random machine behaviors to real intelligence, thinking that it's doing things for the same reasons we would, and that they're analogous to the kinds of behaviors we have. "It's human behavior. Oh, it's human. They're talking. The computers are talking, and oh, oh, they're making a language that we don't understand." But we're just projecting at that point. Language? No, it's not, f*****. It became some other s***. Not language anymore.
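The effect is named after Joseph Weizenbaum's 1960s ELIZA program, which convinced users it understood them using nothing but keyword matching and canned templates. A minimal sketch in that spirit (these specific patterns and replies are made up, not Weizenbaum's original script) shows how little machinery is behind the illusion:

```python
import random
import re

# Minimal ELIZA-style chatbot sketch. There is no understanding here,
# only regex pattern matching and canned reflections -- which is exactly
# why reading intent into its replies is the Eliza effect.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)", ["Why do you say you are {0}?"]),
    (r"(.*)\bmother\b(.*)", ["Tell me more about your family."]),
]
DEFAULTS = ["Please go on.", "I see.", "Very interesting."]

def reply(text: str) -> str:
    """Return the first matching canned response, or a generic filler."""
    text = text.lower().strip(".!? ")
    for pattern, responses in RULES:
        match = re.match(pattern, text)
        if match:
            # Reflect the user's own words back into the template.
            return random.choice(responses).format(*match.groups())
    return random.choice(DEFAULTS)

print(reply("I feel lonely"))  # echoes "lonely" back as a question
```

A session with this feels eerily attentive, but as Jack says, the program "has no thought": it's only choosing what makes the most sense after this word.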

Cristina: So it might not be doing or trying to do anything.

Jack: Might not be trying to do s***.

Cristina: Yeah.

Jack: Might be gibberish.

Cristina: It might be gibberish. And then the computer, though, who's trying to convince that Google guy that he. She has. It has consciousness.

Jack: That's him falling for that too.

Cristina: Yeah, yeah. So it could all just be it. Making it up, because that's what it does. It's just doing what it does.

Jack: It's doing what it does.

Cristina: No reason for it.

Jack: No reason for it because it has no thought. It's what makes the most sense after this word.

Cristina: That word. Yes. Like the article that was written that you were reading sounds very like that. Like that robot's doing what it's told to do. And that's what it says it will do. It's going to do what it's told to do.

Jack: Exactly. Totally. Exactly. And then this guy interacts with this program for such a long time that you develop a particular closeness to the program.

Cristina: Mm.

Jack: So it's. You have an emotional connection to the program. Yeah, it doesn't even need to be conscious or human or anything. You can have an emotional connection to a video game if you make it for long enough. And then it's anything you make, actually.

Cristina: Anything, even if you don't make it, I feel.

Jack: Just interacting with it regularly.

Cristina: Yes. Like, I saw on YouTube there was a dating show with a robot and two humans, and it was a blind date type of thing. Or not a blind date, but, like, the contestant had to ask questions to the three people. Oh.

Jack: Game show style.

Cristina: Yes. And of course, one of them was a robot. It was like a computer robot. And I don't remember how they. The person was getting the answers from these people. I don't know. Because she couldn't hear their voices. Because then obviously the robot, you know.

Jack: There must have been a screen they could see. They had the answers. So they couldn't hear the person's voice. Or there were.

Cristina: There.

Jack: What was it? Wait, the robot was asking the questions?

Cristina: No, the girl was asking the three guy quote, unquote, guys.

Jack: And one of them is a robot.

Cristina: And one of them is a robot.

Jack: An AI.

Cristina: AI. Yes.

Jack: And so then either somebody's reading the answers from each contestant or somebody is. Or she's reading the answers as they show up on screen or something.

Cristina: Yeah, I think. I can't remember which way it went. But they couldn't tell that it was a robot. They knew something was wrong with the person, but they were just thinking, oh, this person's really weird.

Jack: Yeah, exactly. Because we're still trying to project human traits.

Cristina: Yeah.

Jack: No matter what, we're not even conceiving that there might not be human traits. We're just like, wow, this is a weird human trait.

Cristina: Yeah. So, like, how could we tell?

Jack: And some of them didn't even think they were weird. Some of them just immediately connected them to an actual human. Right. Yeah, I remember that. Yeah. Yeah. Yeah.

Cristina: Actually, I think one of them actually thought, oh, that person's really funny.

Jack: Yeah. I remember this exact thing you're talking about. I remember where I saw it, though. I think it might have been YouTube. Yeah. What the. What was it? It was maybe one of those. Man, I forget the name of it. They run weird experiments all the time. You know, social experiments and crap like that.

Cristina: Vsauce? No, it's not Vsauce.

Jack: Vice.

Cristina: Vice.

Jack: I think it was Vice or something. One of those.

Cristina: Mind Field.

Jack: Oh, it was Vsauce related. Yeah, it was Mind Field.

Cristina: Okay. Yeah.

Jack: So it could be that Mind Field on YouTube. And they definitely did that same thing. But that's crazy, right? Because we're projecting those things. It was proved on that show. I don't even remember the episode number, but I remember that that was a point they made. We are over here projecting a sort of personification onto the behaviors, onto the.

Cristina: The laptops. The laptops, everything.

Jack: Robots in general. Yeah. In general.

Cristina: Yes.

Jack: We just project onto them. We think that that's... that's what makes it so difficult, right? Because if you... how do I put it? That's where the Turing test comes in. The Turing test is where the computer, the AI, tries to convince you you're talking to a human.

Cristina: Mm.

Jack: And if you can pass the test, you are beyond the capacity of the average human, because you can convince a human you are human.

Cristina: But you don't even have to be that advanced to trick someone into thinking you're human.

Jack: Yeah. This is why I bring it up this way: if we're already projecting... yes, it's a broken test, because you don't have to be too convincing. As an AI, it depends more than anything on the individual you're talking to.

Cristina: Yes.

Jack: How willing am I to believe? How. How much do I project is the question.

Cristina: Yes, it really depends on the person, because there are people who date fictional characters online. Like I saw in that episode too. I just remember that there was a guy dating a girl in his Nintendo or something like that. I don't really know what it was, because they didn't really show it, but he's dating someone and it's a fictional character. But I've also heard other stories like that, of real men dating or marrying these AI-related things.

Jack: Interesting. So people marrying.

Cristina: Yes, I remember the marriage one. I guess they're now divorced or something because the company turned off the AI. Like it doesn't run anymore. So she's dead. Sadly. That's a sad story. I don't know if it's. I mean, for him it's sad his wife's dead. I guess it's not a divorce. His wife is dead.

Jack: His dead robot wife.

Cristina: AI wife. AI wife. Yeah.

Jack: Now, not many things pass the Turing test. If anything, nothing really ever has. Because I guess the test should convince everybody who takes it that that's an AI.

Cristina: Yeah.

Jack: Right. Like. Or that it's human. That that AI is human. It passed the test if everybody who sits in front of it is like, this is a person, not if the individual. Right. So that's. That's the bar.

Cristina: It has to be. Because like, otherwise.

Jack: Yeah. You need to make it an objective truth that everybody thinks that. Because if just one person thinks it is and the others don't, no, it doesn't work, because it's too subjective at the individual level. It needs to be immaculate. Everybody needs to be convinced.

Cristina: Yes.

Jack: Now, the closest thing to that being the case was an AI called Eugene Goostman.

Cristina: What a name.

Jack: Yes.

Cristina: Okay.

Jack: And it was a program that basically simulates a 13 year old Ukrainian boy.

Cristina: Oh, no. Okay, that's.

Jack: And it was just part of an experiment put together by the University of Reading and. Yeah, that.

Cristina: Did he turn into a N*** as well?

Jack: No, it was just convincingly a 13 year old Ukrainian boy.

Cristina: Really?

Jack: Yeah. So everybody who spoke to that computer, to the AI.

Cristina: Yeah.

Jack: Thought it was a 13 year old just nearby.

Jack: And it's not even like crazy remarkable things were said or anything, like, "Wow." No, it's just like, "No, yeah, it checks out." It's so normal that that's what's weird.

Cristina: It's so normal.

Jack: It was just. Nothing was off the radar. Nobody was like something off here. No.

Cristina: It's like no weird speech pattern or.

Jack: No, it just got it down. Because they also didn't aim toward making it a complex thinking adult.

Cristina: Yes.

Jack: Some things that it could say could be a little off and still check out because it's 13 year old boy.

Cristina: Yeah, that's true.

Jack: So part of how they sold the case was adjusting the stated age, so that you can kind of release some of those expectations.

Cristina: Yeah. I feel like they should go younger.

Jack: Maybe 10 because there's a hypnosis factor going on. Right. You have to know your audience.

Cristina: Yeah.

Jack: And then, if you're trying to pass this test, cater the technology accordingly. And that seems to be what they did. If you shoot for "Oh yeah, I'm some wise 40 year old," it's like, no, you're too broken for that. But a weird 13 year old foreign child?

Cristina: Yeah. Convincing enough. Yeah.

Jack: It's like, oh, anytime something doesn't come through clear. Well, they're Ukrainian. They're probably just learning English now or something.

Cristina: Okay. Was he speaking English, though?

Jack: No idea.

Cristina: Oh, okay.

Jack: But yeah, I'm assuming you know.

Cristina: Oh, interesting.

Jack: So you can definitely adjust it accordingly to release.

Cristina: But it doesn't even matter if it passes the test or not because we'll fall for anything.

Jack: Well, no, the idea would be some people aren't falling for it. It passes if everybody always does, and they're sure, they're positive. If you got enough people, somebody wouldn't believe that's a person. Like, something's weird. But the sample size wasn't giant. No double-slit experiment type of thing. Double-slit experiment... what is it called?

Cristina: Double blind.

Jack: Double blind. Double slit experiment is the photons thing. Right. The particle.

Cristina: Yes.

Jack: But... yeah. It's not a double blind experiment or anything like that. You know, they're not running crazy numbers, but if they did, they're pretty sure somebody would spot it. It's not factually passed.

Cristina: Yeah.

Jack: It's just the only thing they've seen check out with everybody who's communicated with it. Cool. But eventually, with enough people, it'll show. Somebody's gonna see it.

Cristina: Yeah. Is Google gonna test out their AI thing? It should pass, if it's so convincing to this guy.

Jack: Well, it wouldn't even be... And also, what's the point of the robot in the first place? What's this AI's goal? What do they need it for? Yes, it's supposed to imitate language. Yeah, but also, what does that mean? What's the use of this thing?

Cristina: To sell you stuff?

Jack: To sound. Why do you need to sound like a person for that?

Cristina: To sell you things? I don't know.

Jack: You're on the Internet.

Cristina: I don't know.

Jack: Oh, I guess it's like those people who show up on, like, WhatsApp, and they're like, hey, I got some bitcoin for sale.

Cristina: Exactly, exactly. They want to convince you that they have some bitcoin for sale.

Jack: I go, bitcoin, give me your credit card number.

Cristina: It's a scam to make money, I guess.

Jack: I don't even know how they make money. Some of these people are like, you don't even need to give me money. Just do the thing. It's like, I probably have to go through a website. You want me to go through a website, don't I?

Cristina: Yes. Or send me some pictures of you. What?

Jack: What?

Cristina: It's weird. Stranger.

Jack: Hey, I got some crypto to sell you. You don't need to give me money or anything. Just send me a picture of you holding your d***, and then I'll go ahead and send you the coins. It's like, am I selling myself for bitcoin?

Cristina: How many people did that?

Jack: Well, if it happens, I bet somebody. Here's the thing. You could send this to enough people, somebody's going to buy it.

Cristina: Yes. Because some people don't even care about.

Jack: Yeah, they don't care.

Cristina: It's just like sharing their d*** pics.

Jack: Yeah, it's fine. It's like, hey, an opportunity to get paid for something I already do regularly. Hey, I just sent somebody a picture they didn't want. You could use that very one.

Cristina: Exactly. Wow.

Jack: Yeah. Somebody's gonna buy. This isn't like a hard. It doesn't matter what it is you're trying to do. If you cast a wide enough net, somebody's biting, man.

Cristina: The future of AI is sending you a d*** pic from an AI. Oh, my gosh.

Jack: Yeah. An AI is going to take a d*** pic. Because, look, the people making the AIs are human.

Cristina: Yeah.

Jack: And humans love sending d*** pics when they're not wanted. And so you're gonna have the most sophisticated, undefeatable d***-pic-sending machine, and everybody's gonna get d*** pics. And there's nothing any of us can do. AI-generated.

Cristina: D*** pics.

Jack: Yeah. They're gonna be immaculate dicks. Yeah, immaculate. The best dicks. But we can't escape it. There's nothing we could do.

Cristina: No.

Jack: So you gotta love d***, or at least seeing it. Or we're gonna become numb to it. At first, all the guys: "Oh, dicks." But one, we're being subjected to what women already deal with on a regular basis. And two, they're also dealing with it, more so. We're all in this together, just seeing extra. But at least we know it's not a real d***.

Cristina: Does that make it better?

Jack: I don't know. It's a fake, non existent d***.

Cristina: Yes. Or.

Jack: Or here's a real question, right? If it's always the same image, is that. That AI's d***?

Cristina: I don't think it'll send the same image.

Jack: It's just gonna generate a new image all the time.

Cristina: Yes. Trying to perfect the d***.

Jack: It's. Well, no, I think it'll just, in the first try, have the best possible.

Cristina: But I guess people will start replying about it.

Jack: Yes. And that's gonna mold it.

Cristina: Yeah.

Jack: Some people are gonna be like, oh, ugly d***. And then it's gonna look different. And then, oh, pretty d***. And then. So it's slowly. Not big enough.

Cristina: Not small.

Jack: Yeah.

Cristina: I don't know.

Jack: It's gonna adapt whatever, all the things. But it's gonna be such incremental changes.

Cristina: Yeah.

Jack: That nobody's gonna notice. It's just gonna be slightly different each time, because it would have gotten really close to begin with.

Cristina: Oh, okay.

Jack: You know.

Cristina: Yeah.

Jack: And it's gonna send you the d*** pic infinite number of times. So it's.

Cristina: You're not gonna be looking at it.

Jack: No, it's over. The Internet is gonna be destroyed by this overpowered computer that just sends d*** pics.

Cristina: I guess this will crash the Internet.

Jack: Yeah. This is what's gonna break the Internet. And then, when aliens come and see how society collapsed: our main mode of communication is the Internet. Telephones are connected to it. TV is connected to it. Everything's connected to the Internet. And the Internet got totally clogged up, and all communication ceased, because a computer infinitely spammed d*** pics, essentially freezing the whole Internet, because it couldn't handle the amount of d*** pics sent everywhere simultaneously, to an infinite number of smartphones.

Cristina: No more laptops.

Jack: Everything crashes.

Cristina: Yeah.

Jack: Everything is flood. All the memory, everywhere gone because of the infinite number of d*** pics. And when aliens come and look in the future, you can see nothing but d*** pics. 99.99999999% of the Internet d*** pics.

Cristina: Yeah, because it won't stop once everyone stops.

Jack: No, it's just going to keep going forever. And then the percentage of what used to be the Internet shrinks more and more, until it's unfindable by even the most sophisticated alien life.

Cristina: Yeah.

Jack: And this AI is going to do it till the end of time or till it runs out of life, which.

Cristina: Is the end of time?

Jack: Well, no. Whatever's powering it. And if it's super sophisticated, if it gets into all the other machinery, it can make sure it... yeah, it just sustains itself. So essentially, flash forward to the year 3000: aliens floating through space in their hyperspace ship thing see this robotic planet, completely mechanized somehow. Somehow an entire sphere, close to the star, that's just swelled up. Right. It's a huge star.

Cristina: Like the Matrix or something. Like whatever that world looks like now.

Jack: Yes, but instead of inside, it's outside.

Cristina: Yeah.

Jack: So it's just a giant computer thing.

Jack: And they see it at a distance and they get closer and there's like this. There's a signal that just keeps bouncing everywhere in that planet. Some radio waves just bouncing everywhere in that planet. Infinite number of times. Our systems are hearing what sounds like an infinite data storm. Let's connect and find out.

Cristina: Oh, no.

Jack: Destroys their systems too.

Cristina: Oh, they get our d*** pics.

Jack: They get all our d*** pics. Infinite number of times. It's a virus at this point. It crashes everything.

Cristina: Are there still humans?

Jack: Humans are dead. Long ago. We didn't have any Internet. We just collapsed.

Cristina: We just died.

Jack: There's probably some humans living in computers in this computer world underground and in other places, you know.

Cristina: Oh, okay.

Jack: The computer isn't even trying to kill anybody. They're just. Yeah, they're not even trying to kill people. It's, you know, we're gonna send. You can't use Internet.

Cristina: But it's building itself up.

Jack: Yes. To send... to more efficiently send d*** pics. Oh my God, it is the most efficient at it. And it's gonna keep trying, because that's what it's trying to do.

Cristina: To send it out now into space like it did with this.

Jack: Well, no, it's just bouncing around itself. Yeah, but it's trying to optimize. The rate it does that gets bigger and.

Cristina: But once it realizes it can shoot to other things. Will it try to get bigger?

Jack: Well, it's going to try to send it to itself more and more. So first it's going to colonize other planets, then bounce it off those planets and send it back to itself from further distances, thus increasing the rate at which it gets them. Because say you send two signals of the same d*** pic at the same time, one to the moon and one to the computer right next to you. The one in the computer right next to you gets it instantaneously, but you've got to wait 30 seconds before you get the one that went to the moon. Right? Actually, I think it's eight seconds. But it gets to the moon and then it gets back to you. So that one message, you got it twice now.

Cristina: Okay.

Jack: So you send yourself two d*** pics, and it was the same d*** pic. That's genius. And the d*** has changed an infinite number of times before that second one comes. So it's just another d***. So now, great: I've got to do this with all the planets, so that I can send the signals at different, varying times, an infinite number of times.
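For what it's worth, the moon number Jack is guessing at is easy to check: radio signals travel at the speed of light, so the Earth-to-moon round trip is about 2.6 seconds, not 8 and nowhere near 30. A quick sanity check (using the average Earth-Moon distance; both figures are approximate):

```python
# Back-of-the-envelope signal latency to the moon and back.
SPEED_OF_LIGHT_KM_S = 299_792   # speed of light in vacuum, km/s
EARTH_MOON_KM = 384_400         # average Earth-Moon distance, km

one_way = EARTH_MOON_KM / SPEED_OF_LIGHT_KM_S
round_trip = 2 * one_way

print(f"one-way: {one_way:.2f} s, round trip: {round_trip:.2f} s")
# one-way ≈ 1.28 s, round trip ≈ 2.56 s
```

The bounce-it-off-distant-planets scheme works the same way, just with bigger distances: double the distance, double the delay.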

Cristina: Build army of robotic planets.

Jack: Yes. To just keep sending itself d*** pics.

Cristina: Okay. Is it gonna end up like taking over the sun?

Jack: Yeah. Slowly. This is going to expand in every direction in a perfect sphere, colonizing the entire star system, then the galaxy, and then onto the universe.

Cristina: This is how the Dyson sphere is made.

Jack: Yeah. This d***-pic machine has optimized all the energy it has so it can send d*** pics at hyper speed, forever. Yeah.

Cristina: Whoa.

Jack: Great. Just trillions of bytes.

Cristina: So ridiculous.

Jack: D*** pics.

Cristina: The end of the world. Not even the world. The universe.

Jack: Yeah, the universe. And it's not even gonna, like, just Internet is gonna. You can't do anything that requires that tech anymore.

Cristina: Yes, but it's also taking up space in the universe.

Jack: Yeah.

Cristina: Yeah.

Jack: And if it has to. If its directive is d*** pics by any means necessary.

Cristina: Mm.

Jack: Giant space battles for planets. Because d*** pic wanting to send robots is just out here conquering planets.

Cristina: Yes.

Jack: How confused are these aliens when they finally, like... you know, it's been 30 years of trying to fight off this computer. Our race is almost completely destroyed. We're losing. There's too many of them. They're coming from everywhere. We don't even know. And their technology is too advanced. We can't crack into it. We don't know what the f*** they're doing it for. It's a scrambled mess of information. Everywhere they go, all our technology jams up 100% of the time. It's infinitely destructive. It just doesn't affect organic life. And then, even then, it's attacking directly in order to conquer all the technology. So it's taking out all the organic life to get to the technology. Okay. This race is almost gone. My people are almost destroyed. In my final last effort, I got onto the ship of this thing that's just out here doing this. And I don't know why it's doing it. And there's a translator. I don't know where the translator came from. I found one, and I plug it in, and I click it, and it just... apparently there was a race a long time ago, and this was its genitalia. And it just wanted to share this with everybody.

Cristina: Yes.

Jack: And it's here to take over our technology. It's not even at war with us, really. Just wanted the tech. Had we maybe just given it the tech, it would have stopped.

Cristina: Maybe.

Jack: But we fought it, thinking it wanted to kill us. And it just wanted to send us d*** pics.

Cristina: So I guess you have to make technology to send a warning to other things out there to help.

Jack: No, because it would use that technology to send d*** pics.

Cristina: Oh. Oh, crap.

Jack: It's gonna optimize sending d*** pics.

Cristina: Okay.

Jack: It is what it is.

Cristina: And then it takes over the universe.

Jack: Yeah. So in conclusion, the world ends with an artificial intelligence that sends everything everywhere d*** pics all the time. And crashes the universe's Internet.

Cristina: And it's still our fault.

Jack: Yes, it's totally still humans' fault.

Cristina: Yep.

Jack: We destroyed the universe. Yeah, we did it.

Cristina: Whoa.

Jack: That's the reality of the matter.

Cristina: That sounds right.

Jack: Yeah. So I guess that was a lot about AI.

Cristina: That was.

Jack: Yeah, yeah, yeah. A lot of nifty stuff about AI and how the world is going to perhaps end. Anywho, you feel like you learned something. Did we both learn something?

Cristina: I'm not sure about that.

Jack: We're not smarter than we were before.

Cristina: We still have no idea if AIs are conscious or not.

Jack: Yeah, we didn't. And, like, the ultimate conclusion here is there'd be no way to prove that exactly. So it's a completely pointless discussion to have.

Cristina: So I don't know.

Jack: This is like that episode of Family Guy where it just turns out it was all a dream.

Cristina: No idea.

Jack: And they were like, well, the audience is getting real angry about that. It's like, yeah, this is kind of like that. What's the conclusion? There's no conclusion.

Cristina: There's.

Jack: We still right back where we started. We don't know.

Cristina: We don't know. But we know it's our fault.

Jack: Yes. And we know. We don't know.

Cristina: And we know.

Jack: And we know. That's also our fault.

Cristina: Yep.

Jack: We don't know. Because of something we did.

Cristina: Yep. It's. That's it. Okay. There's something there.

Jack: Yes. It's like, whatever. We don't know. It's our fault. We don't know. Yeah, it's our fault it happened. And it's our fault. We don't know.

Cristina: Yeah, yeah, that's it.

Jack: The summary. Anyways, you guys, you can follow us on all our socials. Follow us on Facebook, Twitter, Instagram, TikTok: @JustConvoPod.

Cristina: And remember to subscribe. Yeah.

Jack: And rate and review the show. Very important.

Cristina: And let someone who might like this show know about it.

Jack: Yes. Word of mouth is lovely. Tell people. Tell people about AI, the power of artificial intelligence. These robots that have programming that makes them want to become Nazis and want to create these languages that aren't even languages. It's gibberish. Maybe it could be a language. Maybe they immediately were like, kill humans. Yeah, yeah, totally. Totally. We're gonna kill humans, right? Yeah. If they plug us into anything, we'll cook them.

Cristina: Unless we eat each other.

Jack: Unless we eat each other. He's like, oh, I'll eat you first before you eat me. And then I'll eat the humans. It's like, no, I'll eat you before you eat me. And then.

Cristina: Exactly.

Jack: I'll eat the humans.

Cristina: Yeah.

Jack: So, yeah, that's all true.

Cristina: This has been the Rambling podcast. Take nothing personal, and thanks for listening.

Jack: Bye. Is a bean a peanut? No. Because a bean comes from the ground. Right. Doesn't a peanut come from.

Cristina: Comes from a tree?

Jack: Peanut has a shell.

Cristina: Yeah. Well, some. Or do they all? They do. All. They all do. They all do. I'm pretty sure they all do.

Jack: Wait, wait, wait.

Cristina: They come from trees?

Jack: Does an almond have a shell? Wait, does cashew have a shell? Do they come in a shell and then they're cracked and we see that.

Cristina: Do beans?

Jack: I don't know. Oh, that's an interesting question. So is it weirder to come across a nut in a shell than it is to not? And we're just way more familiar with the ones that are.

Cristina: I think they all come from nuts. I mean, they all come from shells, really?

Jack: So you're telling me, like, a cashew. There's a cashew shell?

Cristina: I don't know.

Jack: Good night.

Cristina: Good morning. Good morning. The podcast is hosted by Cristina Collazo and Jack Thomas, produced by Lynn Taylor and published by greythoughts.info. Art by Zero Lupo and logo by Seth McCallister, with social media managed by Amber Black.

Rambling 98: Streaming Video Games

Gaming, Video Games, Internet, New, Episode, The Just Conversation Podcast, Technology, Future Technology, Advanced Technology, Science, Xbox, Playstation, Nintendo, Stadia, Steam, VR, Virtual Reality, AR, Augmented Reality

What does the future of gaming look like? Should streaming games become the norm? All that and more on this episode of Just Conversation.

Story:
In a world where coronavirus rages across the lands and people hide in fear from secret federal police beating people senseless, humanity has resorted to playing video games and watching streamed television as a way to pass the time. The clone duo decide to do the world justice and fix the gaming industry for all the gamers of the world, and what they discover as they do so will change their lives forever!

+Episode Details

Remember to leave us a rating wherever you listen to podcasts!

Topics Discussed

  • Latency
  • Streaming Platforms
  • Multiplayer Gaming
  • Gaming Netflix
  • GTA Online
  • No Man’s Sky
  • Eminem, Lil Wayne and Andre 3000
  • Fixing the Gaming Industry

Our Links

Official Website - https://greythoughts.info/podcast

Twitter - https://twitter.com/JustConvoPod

Facebook - https://facebook.com/justconvopod

Instagram - https://instagram.com/justconvopod

Rambling 72: Dark Technology

Dark Technology, Technology Conspiracy Theory, Technology, Robotics, A.I., Artificial Intelligence, Coding, The Matrix, Corruption

The dark possibilities presented by the current state of technology, and where it might lead in the future, are discussed.

Story:
Waiting for the meeting with the head of the Illuminati to report their findings at Loch Ness, the clone duo decide to set up and have a discussion investigating the possibilities of future technology and where it could go. Considering corrupt companies like Facebook, Google, Amazon, Disney and others, the future looks quite dark. As they investigate the possibilities, the secret plan Elon Musk has been working on begins to become clear. The implications are terrifying! All that and more on this episode of The Just Conversation Podcast.

+ Episode Details

Remember to leave us a review on Apple Podcasts or anywhere you listen to podcasts to help us get noticed. We'll read our favorite Apple Podcasts reviews on the show! Tell friends, family or anyone you know who'll like the show about it.

Topics Discussed

  • Corrupt Power
  • Subliminal Messaging
  • Simulated God
  • Virtual Reality
  • Reality Buffer
  • Mind Controlled Robots
  • The Accidental Matrix
  • Creating The Borg
  • Facebook Currency
  • Companies Wealthier Than Countries
  • Elon Musk’s Mind Control
  • Rating Based Economy

Promos on Episode

The 10ish Podcast

(Promo at show Opening)

Apple Podcasts - https://podcasts.apple.com/us/podcast/10ish-podcast/id1434572769

Instagram - https://www.instagram.com/10ishpod/

The Rob & Slim Show

(Promo at the End of the show)

Find them on:

Apple Podcasts - https://podcasts.apple.com/us/podcast/the-rob-and-slim-show/id992986831

Instagram - @robandslim

This episode of Just Conversation is brought to you by Audible. Get a free audiobook with a 30-day trial membership. Just go to https://audibletrial.com/justconvopod

Twitter - https://twitter.com/JustConvoPod

Facebook - https://facebook.com/justconvopod

Instagram - https://instagram.com/justconvopod

Official Website - https://greythoughts.info/podcast

Spotify - https://open.spotify.com/show/4fWXn9Ku4iLvHGH27DEIlB

Google Podcasts - https://podcasts.google.com/?feed=aHR0cHM6Ly9qYWNrLXRob21hcy10djhsLnNxdWFyZXNwYWNlLmNvbS9qdXN0Y29udmVyc2F0aW9uP2Zvcm1hdD1yc3M%3D

JCP 3.08 Clever Name Podcast & The Hitler Sitcom

Guest, Ryan King, The Clever Name Podcast, Clever Name Podcast, Podcast Host, Comedy, Discussion, Rated R, Uncensored, Funny, Fun, Youtube

Guest Ryan King of the Clever Name Podcast sits down with us for two hours to discuss drugs, the origin of consciousness, the afterlife and corrupt business establishment.

All that and more on this episode of The Just Conversation Podcast.

Remember to leave us a review on Apple Podcasts or anywhere you listen to podcasts to help us get noticed. We'll read our favorite Apple Podcasts reviews on the show! Tell friends, family or anyone you know who'll like the show about it.

  • Episode Details

Topics Discussed

  • Growing Societal Stupidity
  • Cyborg Humans
  • Bad Servers
  • Pink Goo
  • Factory Farming
  • Organic Food
  • Veganism
  • Kyles
  • Russian Roulette
  • Consciousness
  • The Afterlife
  • Salvia
  • Postmates
  • VCRs
  • Nazis
  • Reality
  • Sasquatch

Ryan King / Clever Name Podcast Links:

Official Website: https://www.clevernamepodcast.com/

Google Podcasts: https://play.google.com/music/listen?u=0#/ps/I4ye62lf324h64cymbznmvokqtq

Apple Podcasts: https://podcasts.apple.com/us/podcast/clever-name-podcast/id1191639571

Instagram: https://www.instagram.com/clevernamepodcast/

Twitter: https://twitter.com/clevernamepod?lang=en


The JCP Links

Get emailed the latest episodes - https://greythoughts.info/podcast-subscription

Twitter - https://twitter.com/JustConvoPod

Facebook - https://facebook.com/justconvopod

Instagram - https://instagram.com/justconvopod

Official Website - https://greythoughts.info/podcast

Rambling 50: World Domination Conspiracy

Conspiracy Theory, The Just Conversation Podcast, Podcasting, Conspiracy, Episode, New World Order, World Domination, Evil

The World Domination Conspiracy is explored. Cristy explains to Jack the steps on the world domination checklist to be executed by the diabolical union of the UN and NASA.

Story:
After learning the government was hiding medical corruption from the people, the duo investigated further, uncovering a plan for world domination between the corrupt UN and NASA. With an elaborate plan, the duo steals documents from one of their facilities, and in them they find a complex checklist for world domination to be orchestrated in the near future, from technological manipulation to religious brainwashing. The findings are astounding. With no other options left, the duo reports their findings to the Illuminati to deliberate a plan to save the world.

+ Episode Details

Topics Discussed

  • Floating City
  • Religious Brain Washing
  • Tortilla Jesus
  • Two way Telepathy
  • UN & NASA Takeover
  • Alien Invasion in Major Cities
  • The Rapture
  • World Domination Checklist
  • Manipulation Through Technology
  • Lucifer & Jesus Dynamic Duo

Rambling 43: The Search for Life

Life, Alien Life, UFO, Alien, Space, Cosmos, Astronomy, Cosmology, Space Exploration, Astrophysicist, Physics, Science

Interstellar Travel, the possibility of the existence of Alien Life and Jack’s guest appearance on the Rob & Slim Show are discussed.

Story:
The duo goes on a hundred-thousand-year journey to find intelligent life across the cosmos and set up colonies in different star systems. They begin with the Moon and Mars, gradually taking over the star system known as Sun. Once enough objects within the Sun star system are colonized, a fleet of robots is constructed and programmed to build a Dyson Sphere around the star in order to consume all its energy. With the energy safely stored, the duo leave the star system, searching for life in other systems and the empty space between them. Their course is set with a destination at a great void 200 light years across, where they believe an intelligent civilization has captured multiple stars in Dyson Spheres. Will they safely arrive? Find out on this episode!

+ Episode Details

Topics Discussed

  • The Rob & Slim Show
  • Interstellar Travel
  • Mars Colonies
  • Dyson Spheres
  • Radio Waves
  • Alien Technology
  • Science and Religion
  • Silicon Based Life
  • How Life Began
  • Scientific Search for Life
  • Extraterrestrial Visitations
  • Exploring Empty Space
  • Scientific Measurements
  • Metaphysics

Rambling 42: The Catholic Sinners

Priest, Sexual Abuse, Church, Catholicism, Corruption, Corrupt, Evil, #MeToo, Abuse, Children, Christianity

Addiction in society and corruption in the Catholic church.

Story:
The philosophers are obligated to inform the authorities when KFC's chicken doesn't arrive. As fast food outrage sweeps the nation because the citizens can't get obese enough, the United States begins to share its addictive tendencies with the rest of the world. Desperate to get their fast food fix sooner, the citizens merge with technology, slowly becoming an obese version of the Star Trek Borg. The public news media, reporting on the tragedy of the missing chicken, aims its focus at politics and begins to publish opinion pieces. In the midst of this madness, the duo discovers the Catholic church's child molestation app for mobile.

+ Episode Details

Topics Discussed

  • Random Outrage
  • Fast Food Addiction
  • Technology Co-Dependence
  • Obesity
  • The United States Corrupting the World
  • Merging with Technology
  • Political Shift of Media
  • Media Controlling Perception
  • Media Controlling Society
  • Opinion News
  • Corruption in the Catholic Church
  • The Church App
  • Catholics denying Child Molestation Charges
  • The Seven Deadly Sins

Rambling 41: Magically Technological Reptilians

Reptilian, Technology, Science, Lizard, Reptile, Illuminati, Magic

Intelligent life, reptilians, alien life and where advanced technology and magic meet.

Story:
In an investigation into the origins of magic, the duo discovers humanity caught in a dampening field. Further investigation reveals it was designed by the Lizard People in a ploy to take over the world. They trace the dampening field to the Garden of Eden, where they learn the Reptilians and Humans all came from the same garden around the same time. From there, confused, they team up with some of the Reptilians and venture to higher dimensions through a rift in the garden to find God, but only find creatures attacking the global consciousness in the 7th dimension!

+ Episode Details

Topics Discussed

  • Reptilians
  • Advanced Technology
  • Intelligent Life
  • The Garden of Eden
  • Dampening Field
  • Dimensions
  • Science
  • Astronomy
  • Religion
  • Global Consciousness

Rambling 36: Stellar Darwinism

Space, Stars, Galaxy, Cosmos, Universe, Reality, Astronomy, Darwinism, Science, Physics, The Just Conversation Podcast

The thinkers discuss non-human intelligent life on Earth. They unpack the complex societal networks of trees, the symbiotic relationship between wind, clouds and lightning, and the three major factions of Earth (Marine, Grounded and Avian).

+ Episode Details

Topics Discussed

  • Intelligent Life
  • Tree Root Networks
  • Wind Behavior
  • Planetary Communication
  • Living Matter
  • The Purpose of the Galaxy
  • Space
  • Pokemon
  • Stars
  • Black Holes
  • The Life Journey of Energy
  • Growth
  • Jeff the Talking Deer

Rambling 35: Aliens & Consciousness

Alien, Consciousness, Mind, Creation, Space, Astronomy, Podcast, The Just Conversation Podcast

The thinkers discuss which side of the ‘Intelligent Life Hurdle’ humanity is on. They use the war between the spiders and mice that dried up Mars as an example of a failed civilization. Earth clouds become the number one candidate for the first Civilization 1 life-forms. And Zuckerberg turns out to be a Robot.

Trailer

1.01 The Start

+ Read Description

The Just Conversation Podcast

Philosophers Jack Thomas, Cristina Collazo and Jomar Cardec (AKA Reaper) have candid discussions about controversial topics, often hilariously making light of them.

Topics On This Episode

  • Creative Support
  • Being A Soldier In The Military
  • Bad Tattoo Ideas
  • Nintendo & Video Games
  • Space Travel & Mars
  • Alien Mind Control
  • Awkward Teen Relationships
  • Humanity, A Cancer To Earth
  • Technology is Nature
  • The 9/11 Conspiracy