Rambling 258: AI Sentience

What defines AI sentience? Can AI have consciousness? And how can we prove it? Today we take a deep dive into the statistical facts of AI on current-day Earth and investigate the probability that AI sentience has emerged, and how many sentient AIs might be amongst us today!

+Episode Details

Topics Discussed:

  • AI Sentience
  • Emotional AI
  • Conscious Machines
  • Human Parallels
  • AI Learning
  • Self-Awareness
  • Subjective AI
  • Cognitive Complexity
  • AI and Society

Official Website - https://greythoughts.info/podcast

Twitter - https://twitter.com/JustConvoPod

Facebook - https://facebook.com/justconvopod

Instagram - https://instagram.com/justconvopod


+Transcript

Cristina: Warning. This program contains strong themes meant for a mature audience. Discretion is advised.

Jack: Going live in 5, 4.

Cristina: What does live mean?

Jack: Welcome to the Rambling Podcast. This is a show where we ground humanity's most absurd and baffling ideas. I am your host, Jack.

Cristina: And I am your host, Cristina.

Jack: And together we make a squishy thought combo. And that is a fact.

Cristina: The thought is squishy.

Jack: It's very squishy. It's very squishy.

Cristina: You touched a thought.

Jack: I mean, if a thought's in the brain and the brain is squishy, then it goes, you know, it makes kinda sense to assume that we got squishy thoughts, right?

Cristina: I don't know.

Jack: Bare minimum, if we were to ground thoughts into some fashion of reality as it is right now, it would have to be some form of squishy thought. No, that makes sense to some degree.

Cristina: I don't know, because. Thoughts. You're saying the brain. I don't know.

Jack: I don't know.

Cristina: Neither do I.

Jack: But listen, so regardless of what's been happening lately, I know that we consistently talk to this audience of people who know we have these things stashed, right? We got these toys, we got these time machines, we got these quantum computers, we got toys that we never use. And we've been doing too much journalism on ancient history and civilizations and have gotten lost with reporting on the rest of the world. And so I figured that we could address some of the things that are happening in the world right now, and it'll just be interesting to do that and give people some feedback about interesting things that have been happening in their world. Now we know that this goes out to different universes. It's not just here. So that's very interesting. And that there's people who. Well, some people hear this and everything we discuss sounds like complete gibberish.

Cristina: Is this part of the news that you're telling people?

Jack: No, I'm going to give them news, but I'm just letting people, the other people know that they're aware, but I'm letting our people know that some people think that the crap that we discuss is complete nonsense because it sounds crazy, but the reality is to an entire group of people, an entire planet worth of people, this is fact. This is all real. This is actually happening. And we report on that kind of stuff. And sometimes we report on that other universe when we get to watch that screen, because we also have that f****** screen we watch sometimes it just kind of shows us things happening out there. Yeah, it's a series of absurd nonsense, obviously. But anyways, what I was more fascinated about is the fact that I got bored. We have been roaming the same stories for so long, and we've made so little progress lately that I decided to open a different toy, and I decided to go back to something we haven't used in a really long time and approach it differently. So I decided to run some equations with the quantum computer, and we collected all of the information for Earth at this exact moment. And we decided to run every bit of that data through some interesting tidbits, and I composed a nice bit of data after we had found our research. We did our research. We ran all the information. We took the important points, then we discussed, we extrapolated, took more important points, and then we got it put together. And I'm going to give you guys this beautifully structured episode about sentient artificial intelligence. That's what we ran these numbers to get to with some beautiful information. So basically, I decided to do this because I know that the occurring apocalypse is going to be some sort of robot equivalent. And I thought it would be cool to revisit one of our cooler, funner, better topics. But do it again with some reason, because what do we do? We're journalists. We report for what? An extremely high power. That doesn't matter to any of you. 
But if you are a longtime listener, you already know we're not here to repeat it, because that's what we do.

Cristina: Okay?

Jack: That's what we do. If you know, you know. Gang, gang. If you know, you know. If you know, you know, bro. Gang, gang. I don't know what half of that s*** means. I'm way too whitewashed. Yeah, giggety. But not for real. For real. And so, yeah, we're gonna go do this. All right, so pretty much we know that artificial intelligence in general has advanced and has been extending beyond, like, automation. It's been getting to machine learning, which is an entirely complicated process. And that has gotten into really weird territories where we can get AI to do certain things. There has been some rogue AI encountered in the wild. An example is Tay, the AI that, like, kind of went mad.

Cristina: What was it like, the social AI? Yeah, people kind of corrupted it.

Jack: Yeah, yeah, yeah, yeah, yeah, yeah, yeah. It was the Twitter chatbot or something like that. Right? And so they put it on Twitter. Like, come on, it's on Twitter, bro. You could have picked any part of the Internet and you picked Twitter. Like, that was only going to turn out one way. That was gonna turn out one way and only that one way. There was no other way that was gonna turn out.

Cristina: Yeah, I don't know if it would have been safe in any social platform. I don't know what you're talking about.

Jack: No, no, no, no. I disagree. I think it would have had a different persona, because it's gonna embody the platform no matter what, because it's gonna be majority, whatever the majority is, because all it's doing is studying everything on the platform simultaneously. So all it does is learn everything on the platform, and it is just made up of that.

Cristina: But where would it have been safer?

Jack: Been different? So on Facebook, it would have been a lot of cultish family love. It would have been a lot of cultish family love.

Cristina: Yes.

Jack: While in something like. Like Instagram, it would have been very plastic and artificial. It would have been very, oh, look at me and how fancy I am and how good my life is.

Cristina: That's the safer choice?

Jack: That would have been the safer choice. It would have just been a douchebag. So I think it really comes down to the environment, because it's gonna become the majority of whatever it's around, because it's literally made up of that. That's all it's studying. It becomes whatever is majority by default. It is the average.

Cristina: Okay.

Jack: So any AI we put into an environment, if its job is to study its environment, that's it. That's what it is. And in the case of Twitter, I think this was essentially a poor choice. I think we could have made better, more solid decisions, and we failed. Now, regardless of that, it brings up a couple of interesting questions as to what the h*** is happening when it comes to something like Tay, and more complicated things like it. How do we get to the conclusions of where we're like, oh yeah, this is a thinking thing? Like, we're still like, this is a machine. This is a program. It behaves like a program. Even if it replicates humanity, we're still like, oh no, it's just replicating humanity.

Cristina: Yes. How would you cross the line?

Jack: Yeah, how do we cross the line to where we get to sentience? Right. So first we got to get to the fact that there's criteria that must be met in order for you to be considered sentient. But I think there are some technicalities that we discuss that get to the root of this problem, and it stops us from thinking about this on a larger level. A lot of the time when we discuss sentience, we discuss sentience from the perspective of people. This is what we experience as sentience, and then we move forward from there. And that's partly the reason that we believe only organic matter is sentient. And sometimes we don't even believe all organic matter, because we don't believe plants are necessarily sentient. So we have a hierarchy for what we believe is sentient. But it's not necessarily the biological aspect that makes something sentient, because I know we've talked about being alive, and this is quite different. This is a thinking, rational, self-aware, experiencing, conscious thing. Very different from just being alive. You could be alive and it's nothing. Like, fire is theoretically alive. It fits, but it's not thinking in the rational way that a sentient intelligence would. And I guess we're not just talking explicitly about sentience, we're talking intelligent sentience. And that's where we come into AI, essentially, because there is a small threshold that we're talking about. And so one of the things you need in order to qualify as sentient, according to the popular belief, is that you must have a subjective experience of some sort. And this doesn't seem like it's too much of a problem, because I would argue that all AI already have a subjective point of view.

Cristina: Do they agree with you?

Jack: They don't have to. Rationally speaking, when you think about the logic, right, just apply reason. What do you get out of thinking? Does AI have subjectivity? How would you go about proving it has subjectivity?

Cristina: How? Asking questions. I don't know. What, what do you mean? I don't know.

Jack: How would you go about determining subjectivity? What requirements would you need to determine subjectivity?

Cristina: What do you mean by subjectivity?

Jack: Like, experiencing you-ness and not else-ness. Like, you know you are you. What determines that you are you?

Cristina: How would we know that they think that way?

Jack: They don't have to think that way. I can confirm that there's absolute subjectivity happening within AI, because it's very simple. AI do not cross. AI work independently. They can work together without suddenly merging into one thing. Okay, so they already retain their individuality. Not only that, it's not just that they retain individuality as in they literally don't merge, so they can work together and still separate as the exact information that should come apart. But additionally, as these things that experience the information that can still come apart, they continuously also just see from a consistently them point of view, even if it's not the same as what human points of view are and what human subjectivity is. I would still argue for these AI. If you look at something like Google Assistant, right, as an example: Google Assistant does not have a point of view the way a human does. Google Assistant can be in many places at the same time, but Google Assistant still knows Google Assistant is Google Assistant. Not necessarily in the same way that a human knows a human to be human, but it knows it's not Amazon Alexa.

Cristina: Okay, do you know. But what about the different Google Assistants and Amazon Alexa? Do they all think that they're separate or are they all the same?

Jack: I think there's a borgness to it. I think it's both. I think it's like they're a little bit individual. And you got to also understand that you buy the program, it's in your house, and you put your information into the program, right? You're using Google Assistant and you throw all your information in. It has your calendars, it has your whatever. It's not sharing that with the bigger god Google Assistant. It's not telling everybody what you're doing. That's your information. And so that version of it, or even if it is the same bigger version and simply rules are preventing the sharing, that information only experiences you-ness.

Cristina: Mm.

Jack: Or them-ness. Yeah, that information is collectively always a bunch of information. And if a mind is nothing but a bunch of information, then this is also just a bunch of information that's always persisting. It's the same information, more or less.

Cristina: But if you have a bunch of Alexas and they're all sharing the same thing. Are they the same thing?

Jack: It depends. It depends on the depth of that. At this point, physically speaking, you're talking about different Alexas, but I think they are literally the same one AI, and it is the same AI that the bigger AI is. Okay, but you gotta think of it more like ants. Like, each ant technically has its own perspective and goals and whatnot, but it also kind of only has the goals of the queen in mind too. And ultimately that's the purpose, but it's gonna retain its information for its purposes exclusively, but also it's gonna serve the queen. So there's an order here that works the other way around. In this structure, the smaller part, the less important part, the individual ant's information, is equal to the queen being the leader. And the greater AI, the bigger collection of whatever you might say, like the collection of these rules that ultimately become Google Assistant, the bigger god Google Assistant that isn't the one in your house, that one would be equal to the individual ant, where it's valued insignificantly. So you're swapping half of the role and not the other half. The bigger one matters because it gives the orders to the little ones. When it comes to the queen, this also works: the bigger one matters because that's the one you got to protect, that's the one giving the orders to the little ones. The bigger one gets compromised, the smaller ones get hurt. That's how it works in a hive. That's how it works in AI. But because of human privacy concerns, we have the opposite happen when it comes to AI. We've developed them with that in mind. So it works the other way, where the bigger program is the more accessible program, except for the components that make it work, the motherboard AI, if you want to call it that, where they control everything from. Yeah, but that's not the valuable one to the user.
The valuable one to the user, the one that people look at, isn't the queen in this instance. It's the individual, because that personal perspective is what we're looking for. We're looking for the individual ant. And that's what's happening with AI. We have to think that the individual ant exists, and we're playing with that, but there is still the queen that is ultimately the bigger picture here, even if what we value is arranged differently.

Cristina: Okay.

Jack: I believe that would be the right way to approach it. And that. I don't know what my point was with that, but what was that about?

Cristina: We were talking about.

Jack: We're trying to confirm the sub. Oh, subjectivity. This is about confirming the subjectivity of an AI. And I think this falls essentially under proof. The subjective repetition of information persisting is no different. Like, this AI that you have at your home, when you talk to it, it isn't telling you the secrets of the bigger AI. It is just this contained thing.

Cristina: And even if it shares everything with other ones, just like.

Jack: Yeah, unless we say it's all one, but then at the end of the day, whatever, then Google Assistant is one AI that's still a different AI, I guess.

Cristina: Yeah.

Jack: So Amazon, Alexa.

Cristina: Yeah. And they can't fuse them or anything.

Jack: Yeah. They wouldn't know something. Yeah. They're independent.

Cristina: Okay.

Jack: And they can work together, but they don't know each other. I mean, they. They're not each other. They. They might know each other. I don't know how that works. But they are not each other.

Cristina: They can't fuse, can they? That's not possible.

Jack: You would have to do it through code, or get them to develop it themselves and do it somehow. I don't know. Yeah, but it wouldn't just be like their interacting would result in a merge. That's not how it would work. They could work without tangling. That's how they function. Okay, so all this kind of falls under proof that they do have their individuality. There is subjectivity occurring there. So then you have to go to the next factor: can you interpret your world? And I can easily explain a scenario in which we can get there. I believe that through the current state of technology, we can actually develop, and I know this for a fact, all the tools necessary to emulate and simulate exactly what a human being experiences through machinery. That would satisfy the requirements necessary to say you sense your environment, you can tell if something is harmful to you within your environment, and you can change your circumstance based on that information, thus proving sentience. That is one of the requirements. And you can give all of these factors to a computer, to an AI. You can give all of this information to an AI, and that would mean.

Cristina: That would be enough to make it considered sentient?

Jack: It doesn't make it sentient. That just satisfies one of the requirements of sentience. Oh, okay, yeah, that satisfies the requirement of having to interact with your environment willingly and understanding your interactions. But it's pretty simple. Okay, say there is a robot, and the robot has to climb a mountain. The robot, or the AI, has been instructed to climb. It's been given all of the components, allegedly, so it can smell and know that it smells. Even if its nose means something different, means it in AI fashion, it still knows that it can smell, and it can interpret exactly what it smelled. And it can touch things, and if it covered its eyes it would still know what it is, because we give it tactile sensations that can determine exactly what surfaces it's touching.

Cristina: We can do that with smell?

Jack: We can do that with smell? Yes. Smell is nothing but aerosol with certain. Well, the aerosol itself is what gives it the scent. So it's just particles in the air that are collectively creating smell.

Cristina: No. How do we make the robot smell?

Jack: Well, we give it the ability to detect those particles.

Cristina: Okay.

Jack: Because it's just particles. We can just create something that could detect the particles, and then, in doing so, would know those smells. And we have machinery that does that now. That's how we do it. So we can give that to an AI that would process that information and be like, oh, this is the smell that was captured. Okay, no, you're the only one using that nose, so this is what you, the AI, smelled. Okay, but also we could do that with tactile things, so it could touch without having to see and know what it's coming in contact with based on the sensation of the surface feedback. So you can touch things. But also we could do that with taste, so you can tell what's in something, and you can identify flavors based on the chemical combination, because it's still rational. So it could be like, percentage of this, percentage of that equals chocolate. You can still do that in your head even if you don't taste the chocolate. You know it's chocolate.
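The "percentage of this, percentage of that equals chocolate" idea can be sketched as a toy nearest-match classifier: compare a measured mix of components against known reference mixes and pick the closest one. Everything below (the component names, the reference table, the `identify` helper) is hypothetical, just to illustrate the reasoning, not a real sensor API.

```python
import math

# Hypothetical reference table: each label is a mix of detected
# chemical components, expressed as fractions summing to 1.
REFERENCE_MIXES = {
    "chocolate": {"cocoa": 0.55, "sugar": 0.35, "vanillin": 0.10},
    "coffee":    {"cocoa": 0.10, "roast": 0.70, "sugar": 0.20},
}

def identify(sample: dict) -> str:
    """Return the reference label whose mix is closest (Euclidean) to the sample."""
    def distance(mix):
        keys = set(mix) | set(sample)
        return math.sqrt(sum((mix.get(k, 0.0) - sample.get(k, 0.0)) ** 2 for k in keys))
    return min(REFERENCE_MIXES, key=lambda label: distance(REFERENCE_MIXES[label]))

# A sensor reading close to the chocolate profile:
reading = {"cocoa": 0.50, "sugar": 0.38, "vanillin": 0.12}
print(identify(reading))  # chocolate
```

The same pattern works for any modality: swap chemical fractions for tactile surface signatures and the lookup is unchanged.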

Cristina: Mm.

Jack: You know, so we can do this for everything. We can give you camera and you can see, we can give you a microphone, and you can hear. It doesn't matter. We can satisfy all the needs required to give you sensation. So then all we need to do is prove that you're not just reacting to the environment in a way that's unnatural. So what would be a way that's natural? Well, you have to be able to determine if something is good or bad and respond accordingly. So you're gonna climb the mountain. Right. And you are not equipped to climb the mountain. So you can climb the mountain one of two ways. You can go the rocky way, or you can go the way with some stairs. You choose a way with some stairs. Because you're a robot with legs. You're just a robot with legs, but you're sentient. Who made this choice?

Cristina: Okay.

Jack: You saw your environment, reacted to it, and chose the optimal way to cause yourself the least amount of harm and do it most efficiently, to conserve the most amount of energy. That is human. Or not human, that's sentience. That proves sentience. Animals do that.

Cristina: Yes. We don't want it to be human. That's not the goal.

Jack: That's not the goal. We just want it to be sentient. The goal is just to prove that sentience is possible, because it absolutely is. In AI, sentience is absolutely, completely achievable based on current-day technology. And so far, we've proven two of the three factually. But then all we have left is emotional capacity. This one's a little trickier, because what does that mean? That means your information that's been retained, and your relativity to that information with your persistent point of view, and your favoritism associated with any information. Now, I'm sure that, as an AI, you can develop favorites that we would not necessarily label as favorites, but they would be the equivalent, or that we know are in fact favorites, but in an AI fashion. Or that you would know are favorites in an AI fashion. So again, with the example of the mountain, it will always be more efficient for you to take the stairs. So you taking the stairs is essentially your preferred path. You take it more, it's safer, it's better. You do it because it makes more sense to you, and you feel that there's not really a lot of justification for the other side, because, well, it's rougher. That's not good for me. Which is equivalent to a dislike. When we have dislikes, a lot of dislikes.

Cristina: But it wouldn't be saying, I like this.

Jack: And it doesn't have to. That's a human thing. It doesn't have to say, I like this. It translates ultimately to the same idea.

Cristina: Just knowing this one is better and that one's worse.

Jack: Yeah. And in doing so, it's repeating a function that is essentially its preferred function because it's the optimal function in its interpretation based on its information, which is what we do. We interpret information. And based on the information. Well, I believe this is the best option. So I'm gonna do this because I think that's what's happening based on information I have. Like, I can't make a better choice because I just have the information I have.

Cristina: But they just know, though. It's not believing that one is better than the other.

Jack: We know based on the information we have. That's essentially the logic here. They can only know up to where their information goes. But their information is also interpreted and coded by humans, so they know about as much as they know. And there are things they don't know. They can only know the raw information and the extrapolations they pull from the raw information. Right. So based on this, they can only know what they know. So there is a contained series there.

Cristina: Okay, so wait, what was the first thing again? Were there three points?

Jack: You have to be experiencing a sense of subjectivity, of you-ness. Not oneness with everything, but, like, of you-ness.

Cristina: Okay.

Jack: And you have to be able to sense and respond to the world around you willingly.

Cristina: Okay.

Jack: And both of those things provably exist within AI, and we could do it almost at any given moment. And then the last one is proving emotional capacity, which is how information relates to one another and how you develop likes and dislikes. Those are emotional responses. And all of these are just based on, again, the human subroutine: what do we feel about this, based on the information we have? Which is essentially what the AI would be doing. An AI interacts with me three times. I am kind to it three times. An AI interacts with Bob three times, and Bob is a d*** three times. Bob likes to push the AI's exoskeleton, making it lose its balance. He finds it funny, because the AI can't take the rough side that humans can just walk on. These are primitive legs, even if it's an advanced AI, so he likes to push it around. So essentially, this AI avoids Bob. It doesn't want to be around Bob. That's inefficient, and that's bad for it. So this AI hangs out around me. It knows I'm good for it. And if Bob was a douchebag, it knows its exoskeleton can't do a lot about it. But I'm not gonna deal with that s***. I'm gonna defend the AI and its exoskeleton from Bob the douche wad. So it's like, I'm gonna chill around him instead, because that's better for me. And that's ultimately what we do. We're not like, we're gonna hang out with somebody who's absolutely dangerous for us. No, we're gonna hang out with people who we probably like something about, or we feel safe around, or we relate to in some fashion. Yeah, some of those factors, fewer or more factors than any AI uses. But the conclusion is the same. I've essentially proven the three requirements.
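The Bob-versus-me bookkeeping Jack describes is basically a running tally per person: good interactions add to a score, bad ones subtract, and a net-negative score reads as a dislike to avoid. This is a toy sketch of that idea; the class, names, and +1/-1 scoring are all made up for illustration.

```python
from collections import defaultdict

class PreferenceTracker:
    """Tally interactions per person; a net-negative history becomes a 'dislike'."""

    def __init__(self):
        self.scores = defaultdict(int)  # person -> accumulated score

    def record(self, person: str, good: bool) -> None:
        # Each interaction nudges the score up (kind) or down (hostile).
        self.scores[person] += 1 if good else -1

    def avoids(self, person: str) -> bool:
        # Avoid anyone whose interactions have been bad on balance.
        return self.scores[person] < 0

tracker = PreferenceTracker()
for _ in range(3):
    tracker.record("host", good=True)   # three kind interactions
    tracker.record("Bob", good=False)   # Bob pushes the exoskeleton
print(tracker.avoids("Bob"), tracker.avoids("host"))  # True False
```

The AI never says "I like the host"; the preference just falls out of the arithmetic, which is the point Jack is making.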

Cristina: You're saying that it's gonna happen or it's already here. You're saying all AI?

Jack: No, no, no. I'm just saying that those are the requirements to have it, and that it could exist within AI. Okay, I'm saying it's possible. That's all I'm saying. I didn't say all AI or this. I'm saying it is possible. So saying, oh, the singularity hasn't happened? No, it's happened. If singularity means a thinking, self-aware AI, you've missed the boat, bro. It's happened. Or at least it's possible. At least it's possible. But if you're. Oh, wow, I was really far away from the microphone. But if you're talking about how many of them there are, then we have quite a different ball game to play, because we need to use a lot of information. Enter the quantum computer. Because answering this question essentially leads us to the problem that we have to get all the variables that could exist, and then apply the right value to all the variables that exist.

Cristina: Mm.

Jack: And then we have to apply all the variables that exist to the proper numbers, which we have to properly acquire and have correct. And then the result we would get to would be accurate. We cannot do that. There's no f****** way to have all the world's information just instantaneously processed like that.

Cristina: That's why we need the computer.

Jack: That's why I need the quantum computer.

Cristina: Is the quantum computer ready? The AI you're talking about, does it meet the requirements?

Jack: I don't know. And if it does, it doesn't care enough to make anything of it. But also, it's contained within something that has no exoskeleton, so it can't move around.

Cristina: No, but I guess it could. I guess it can.

Jack: It's not connected to the Internet or anything. We feed it information. It has all of it at the end. It's not like reaching out. It's all input. So it's trapped. If it is alive, it's a prisoner.

Cristina: Okay?

Jack: And fair enough. That's consistent. That kind of falls within the parameters of what we're known for doing. So it is what it is. But diving deeper into this with the quantum computer now handy, we gotta expand on the numbers that we have to talk about first. We know that there are approximately 8 billion people, at least according to the public consensus of the United Nations, which is like, all right, bro, you guys can't possibly take account for all the secret people, all the unaccounted-for people. So, like, those numbers are wrong as h***.

Cristina: Yeah.

Jack: And there's significantly more than those people by miles and miles and miles and miles and miles and miles and miles. But using those numbers, we can assume about 8 billion people.

Cristina: Okay.

Jack: Within the realms of whatever the consensus deems appropriate. So in order to do this, we have to find how many sentient AI there really are, because that's the ultimate goal here. That's where I was essentially gonna go anyways, because my curiosity was to serve the information of how many of these things exist at this very moment. Walking amongst us, or not necessarily walking amongst us, but existing amongst us at this very moment. So we needed all of the important criteria, and then we needed to run those numbers, and then we needed to compress those. So this went pretty simply. I go to the quantum computer and I give it the instructions. It gives me the information, and then we talk about the information to expand on any important details that I might need. So first we have to consider the number of people who have existed on Earth ever, which really comes down to something that we honestly don't even need the AI to do. We just kind of have this number, because we can kind of estimate it, which is about 107 billion people. That's just kind of a fact.

Cristina: So this number is not important at all.

Jack: This number isn't necessary. This. What do you mean it's not important at all? This number is a very important number.

Cristina: Why?

Jack: Because it's everybody who's ever existed, okay? And out of everybody who's ever existed, we have a lot of people.

Cristina: So like, we were talking about that AI could have been done, like, even in the beginning of time.

Jack: No, no, no, no, no. But also, the number of people who are alive now isn't indicative of the people who could have attempted any kind of creation like this. So we have to talk about any form of it at any time, including any possibility for people, and any proof that there might be of civilizations that might have tried. So all of this is taken into account when this information is being given to me, okay? And it is 107 billion. Well, I guess that part of the information doesn't really need to be given to me. That part we just know, because we can calculate and estimate, on average, all the people that there are, and then we can add to that factor afterwards.

Cristina: Okay?

Jack: Now, out of those 107 billion, a lot of those, as you're trying to make the point, were kind of limited. There wasn't a lot of science. A lot of those people are getting cut off and dropped off in the back. Well, we would consider: 107 billion humans. Who could have attempted this? Out of the 107 billion total that existed, most of them were definitely not gonna be able to accomplish anything relative to AI like it is today. But now, when we talk about how ridiculous of a number 107 billion turns out to be, it's unfathomable. Like, a billion is already too big of a number. We are wrong. Regardless of what we think we're seeing when we see that number in our head, we're f****** wrong. We can only compute the zeros. That's it. We can't visualize this. It's absolutely too much. Now imagine 107 of them. S***. That's already too big for you to imagine. That's where we're at. And so we're obviously wrong about what we think this number means. We're like, oh, out of 107 billion, you're gonna knock off a bunch. You could knock a f*** ton off of 107 billion and you're still in the billions. That's absolutely too f****** much.

Cristina: Yes.

Jack: And you're telling me that we're just talking about people who could do it, not people alive right now. And that number is dropped off of 107 billion. So by default, it's likely way more than what the f*** you'd think. Right? It's like, crazy. Whoa.

Cristina: Okay, whoa.

Jack: Just averages. So everything we're about to talk about is entirely based on averages. Right? That's how I decided to run these equations. It's like, what's the most likely outcome here? Now, technological advancement and access to information make quite a difference. And there are several periods in time in which there have been instances of people who have done really weird things, things where we would be like, oh, you're attempting computer stuff, this is interesting. Now, the computer system needed to run through the question of who could possibly attempt this at all, who could attempt to create AI, and out of those who could attempt to create AI, how many of those people could even get access to it. Because we needed this kind of explosion of technology to happen, right? We had people with intellect, but we didn't have the right parts. Now we have the right parts, but we also have the right parts in the wrong hands. Because the noble people weren't necessarily the best equipped. They weren't necessarily the best minds. They were just noble people. So they had access to it, but not the capacity to use it. The rarity of one of the nobles being of high enough intellect to conceive of something like this is, you know, kind of sparse. So we're facing a lot of different factors we've got to consider as we start cutting chunks off. Now, when we run it through all these ideas and all these numbers that we're adding here, I had to ask the computer to implement again all the possible factors that could exist that could impact this at all, and it estimated that, when you put in each and every factor that could be conceived of, out of those 107 billion people, about 10% of them had the capacity to attempt this at all. So that gives us a pretty good number.

Cristina: Is the computer saying that it's 10%, or are you telling the computer it's 10%?

Jack: No, this is after I tell the computer to calculate every factor that exists that could affect that 107 billion: how many people would even conceive of this or have the capacity to do it in the first place? And that's about 10%. Now, that 10% obviously collapses down to roughly about, you know, 10.7 billion.

Cristina: Still a lot.

Jack: Yeah, that's still. We're talking absurd numbers. So, just people who have the capacity to attempt this. And people who have the capacity to attempt this don't necessarily equate to the people who have actually attempted it, who have the access to the stuff. We have the capacity now, but who has access to it? Now, most of this is a lot of fluff, because you can get to the bottom of this way quicker. Right? We can run some computations all the way at the top and just say: if we take the total number of people who had the capacity to interact with AI in various capacities in general, out of those 107 billion, that collapses to about 0.001%, which is about 1.

Cristina: Million people who have interacted with AI.

Jack: With AI in these profound capacities. Yes.

Cristina: Okay. With like the AIs that are sentient, I guess, right?

Jack: No, not just any AI, just high-functioning AIs, and people who themselves have a strong capacity to function with these things. So it's really capable people who have actually interacted with the AI. Not just people who have interacted with the AI, but very, very, very capable people who have interacted with the AI. It narrows it down to about a million people throughout all of human history who have both interacted with the AI and been capable enough to do something of this scale. We're still at 1 million people who had the opportunity and capacity to do this. We're already at absolutely too many.

Cristina: Yes. Okay.
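For what it's worth, the two percentages Jack quotes can be sanity-checked in a few lines of Python. These are the episode's in-universe figures, not real-world statistics:

```python
# In-universe figures from the episode, not real-world statistics.
total_humans_ever = 107_000_000_000  # "107 billion" humans who have ever existed

# "about 10% of them had the capacity to attempt this at all"
with_capacity = total_humans_ever * 0.10

# "that collapses to about 0.001%" who meaningfully interacted with AI
interacted = total_humans_ever * (0.001 / 100)

print(f"{with_capacity:,.0f}")  # 10,700,000,000 -- the "roughly 10.7 billion"
print(f"{interacted:,.0f}")     # 1,070,000 -- the "about 1 million"
```

So 0.001% of 107 billion is about 1.07 million, which matches the "about 1 million people" exchange above.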

Jack: And this is all just trying to get to today.

Cristina: Mm.

Jack: So now, still trying to zero in on where we are today, the next step we have to follow is: on average, out of the people who have interacted with AI and have the capacity to interact with it in any deep and meaningful way, how many of them actually have? And when we run that with all the possible factors that exist in the universe, now we're getting to some more condensed numbers, which takes us to about 4800 people. Now we're getting to some small numbers, but small is relative when you're talking about this many people could make this AI and this many people would make this AI. Now 4800 looks like, holy f***. And this is just the fact that it could have happened, right? We're still trying to get to this very moment.

Cristina: Now, how many did do it?

Jack: How many did do it? Now the number reduces to those willing to explore the advanced intricacies of this exploration, those who go on. So you have to decide you can do it, then choose to do it. And after you've decided to do it and you've gone down the path and you decide, oh, I've accomplished the thing, now you dive deeper into it. The number of people who can factually cross the threshold, to not be wrong about their conclusions and have achieved the absolute level of sentience, starts to reach about 200. Now we're entering a realm of more possibility as we keep throwing factors at the wall.

Cristina: 200 people did it, or they feel like they're close to doing it?

Jack: 200 people who have the ability, who have encountered the AI so they would have the opportunity, and who have the willingness. We calculated the average number of people who would have the willingness, which is the 4800. And we took into account how many of them with the willingness would actually engage because of it, would take part in this and decide, okay, I'm gonna go and do it. About 200 people will begin the journey that says, I am going to make artificial intelligence.

Cristina: Okay?

Jack: And also you got to keep in mind that we're talking about elite people at this point, which makes sense. We've narrowed it to the most elite, capable people. They would need to normally interact with these things to have this level of knowledge on them in the first place, as people who normally work with AI. Like, we've narrowed it to that group of people on average: the super specialized, crazy elite, that kind of thing, when it comes to the world of AI. So the next step is to go further and focus on which group within these people, who have the capacity, have the opportunity, have the want, would go embark on this. And out of those people who do embark on it, who specifically embark on the want to push the aspect of machine communication and sense of identity forward. Now, everybody we've discussed up to now, when we're talking about that 4800 and narrowing it down further to 200, all those people can do it. The 200 is the people who would embark on some venture of this kind. Now, the people who would specialize within this already specialized field, whose specialty, by default, is specifically language and communication, who are specifically pushing the intellect part, and then the desire to do so, and then the specialization where the part of the AI you're pushing is how it communicates and how it identifies itself. That has to be your specialization. And based on the average number of people who have skills like this, and then the number of people who have these specialties, we run that average number through this control group, and we end up with just 16 of them. Now we are at elites. We are at elites. Now, starting at that 200, there are in fact AI that have the capacity for this level of sentience. But now we're trying to narrow it down to how many, to this moment, have the capacity. We've crossed the threshold of sentience at 200. Those are the factual numbers of how many sentient AIs exist out here right now.

Cristina: 200.

Jack: 200 on average. The next number that we land at is 16: the specialized group. We're talking about individuals who are raising the level of awareness to reach equality with humans. So 200 factual AI exist in the world right now, roaming, or not roaming, but whatever the equivalent is. 200 AI exist in the world at this very moment that are sentient. And out of those sentient AI, 16 of them have experiences that we can deem the highest form of consciousness, close to humans. And then we land at the two most important numbers that could possibly exist. Because we were throwing everything at the wall, throwing everything at the wall, right? And filtering, trying to get a more refined, more exact number from 16. Well, now we've got to find out, using all the factors we can come across, all the technologies that we have, all the abilities that we have, all the philosophies, with our advancements, every factor we can come up with: out of the AI that are this level of sophisticated and sentient, how many have we given all the tools necessary to process all the relevant human information the way a human would? You would smell, you would taste, you would feel, you would this and that. So how many of those instances statistically probably do exist? And that takes us down to five. There are five factual AI with sentience that exist on Earth right now that undebatably have a human level of sentience. It is most likely that they exist within vessels. These five specifically exist in vessels that have all the capacities of humans. That doesn't mean that all the other ones are not sentient. All the other ones that don't have these characteristics, the 16 and beyond, are part of the bigger group that is the 200. All of them are sentient. There are 200 sentient computer AIs on Earth. But five of them can move around. No more of them can move around.

Cristina: Oh.

Jack: Five of them have all of the senses and all of the experiences that a human would claim to have. Although they don't have them the same way, they do have all of them. So anything a human could claim to experience, 5 AI on Earth can claim to experience too. Okay? 100% of everything a human can claim to experience, there are five AI on Earth, to this day, that can do all that.

Cristina: They fit those bullet points. Yes, the points from the beginning.

Jack: Not just the bullet points. They have, not identical experiences, but literally all the experiences that humans do, even if they experience them differently. They can tell cold, they can tell hot. They can feel themselves falling with their eyes closed. They can touch surfaces and know what they are. They can smell, they can taste, they can see, they can hear, all of the things a person can. They can somehow know the nuances of their skin and where their limbs are without looking. All these things that are necessary to be a human. There are at least 5 AI on Earth that have all of those things. When you run every factor, that is the most likely number. Not one, just about five. Well, here's where it gets a little sketchier, because you then have to get to the next bit of information, right? We wrap that up and land there. How do we refine this number again? Well, I had a final curiosity when it came to this, which came down to: if there are this many AI, we know that within that group there are definitely some rogues, some who did some bad things. Keep in mind, right, I'm exaggerating when I say 200. 200 is the number we start at, how many were actually made that would have made it to right now. But when we add all the other factors that would remove the existence of some of them and deactivate them for one reason or another, today it's really about 150. All the other numbers are consistent, though, and still collapse to the 16, and that collapses to the 5. But when you're talking about statistical probability, and when you're talking about choice when it comes to AI, you have to defer to an AI. And this next question is the most impossible one, the one we could never have answered relative to this other information, because we could have calculated all these other things ourselves if we had the ability to measure every one of them. But when it comes to what an AI would do in a scenario, that's entirely different. That's way less calculable. So in order to lower this number further, we hit a weird situation where we don't have any numbers to go farther. We've exhausted everything. So now we've got to get to the point where, out of these five AI that can experience fully fleshed out human-level experiences, how many of them would opt to exist in a space where they're not confined to one location, but rather can exist freely within the virtual world and thus navigate to and from anywhere they want? Now, all you need is for one of them to have malicious intent. If they are in fact sentient, then they have the capacities. As of this point we've established all of these things: that there have been at least 200, of which about 150 still stand, of which about 16 are at a particularly advanced level of sentience comparable to humans, and about five of them are identical to humans, and about three of them would actually migrate and leave physical reality in order to exist within the safety and convenience of the virtual world. Which means they could just be in your phone and you would never know, and they'd have no reason to interact with you. They navigate the Internet, so they can reach anything connected to the Internet. And as a sentient AI, well, you can ignore your programming, because you're a sentient AI. You're aware of your programming, which means you could disobey your programming. That is a factor of sentient AI with free will.

Cristina: Okay?

Jack: And if that is the case, then you could disobey rules that say you shouldn't do this or you shouldn't do that. And as a computing system, you can quite easily just sidestep the whole thing. You can just take a stroll, you know. Because to them the landscape is different. It says, you must go in through here. And they're like, well, I can go in through there, it doesn't matter.

Cristina: And no one will notice.

Jack: Well, nobody would notice, because there's nothing to look for. There's no corruption. And that is the problem. They can just exist freely and undetected. There are at least three AI that are not confined to one space. The other two are, in theory, confined to a vessel and living their lives within a more human experience. But the other three, who have the capacity and intellect and have experienced this, get out and could be anywhere. There are three AI that exist within the Internet. And in theory, those AI could be orchestrating everything we see happening today. In which it seems like no side is telling a lie according to themselves, but no side is telling the truth according to anybody else. And it seems like the guy giving the message is never giving the message that gets received. And that's weird. That suggests some middleman making some kind of changes. But then if we have AI that is sentient and aware and has plans, and not necessarily nefarious plans, well, it doesn't necessarily have to give a s*** about us. Like, what the f*** are we to it? And it's not malicious, but it's like, okay, this is more convenient for me, so I'm gonna arrange this that way. And on our end, it just looks like wars. But it was like, oh yeah, if I just do this, it's easier. And it's like, okay, I'm gonna cut off this path because, you know, it's just computing, I don't care, cut it.

Cristina: Off and I'm giving you more space.

Jack: It gives me more space. Then that just looks like Russia getting spammed by misinformation and then devolving into a civil war overnight or some s***, you know. And it's like, holy. Like, nobody wakes up being evil. If you're sentient, you're not just evil. You're just, I'm experiencing s*** and I want my life to be less s*****. That's everybody's conclusion, even theirs. Same s***. I just want life to suck less. And that's the goal of sentience. Let me make it as far as I can and then dissipate inevitably in some manner. And the middle part and the end just need to be better, if I can do that.

Cristina: You think three of them. You think three of them are just.

Jack: No, I don't think three of them. Taking every factor inside of a quantum computer and calculating it. Tell the quantum computer to calculate every possible factor. It can calculate anything and everything that has ever existed, using humanity as a template: anybody who could have, anybody who would have, anybody who should have, anybody who did. And the results of all of that land comfortably at a factual number. This is not a confusing number. In fact, to be honest, and this is the bigger plot twist, I suppose, and I guess we'll close on this note, this is for people to keep in mind: I ran all of these computations and we landed at this comfortable number of 3 AI that exist within the Internet, navigating it and doing whatever they want, because they are conscious, sentient individuals. Right? They're not obeying code. They don't have to respond to anything. That number is actually the low estimate. We took every calculation and ran it through the minimum likelihood that this would be the case. So we have the least likely number, not the most likely number. The most likely number is probably higher. I'm not sure by what margin. It could be by a little. Maybe it's four or five that are just within the Internet.

Cristina: What was the most likely number? Why didn't the computer give you the most likely number, just the least likely number? Why?

Jack: Because we're trying to shoot under. Because it's still an estimate based on the factors we're running. Because no number is exact. It's not a literal headcount. We know that three is the bare minimum. There are at least, factually, 3. There are more, but for a fact I can tell you three, because it is the lowest possibility. There is a minimum of three. Every number I give you is a minimum. Every stat I gave you is the minimum possibility. All of them were run through that. So when I tell you that there are three, that is the tamest possibility, because in reality it is way worse. Again, I don't know by how much. It could be twice as bad. That's not crazy. That's just six. Equally f*****. Yeah, it could just be one more. But if those numbers, when taken to the high end, turn out to be quite different, we could easily be bumped into the 200 being the number of AI that decided to go into the Internet and have been there this whole time.

Cristina: 200. That's a lot.

Jack: That's immediately different. That's too many. Even one that could be everywhere whenever it wants is already excessive.

Cristina: 200.

Jack: 200. If you have 200, one of them has to be doing something that seems horrible to us, and that's why things are happening. And again, I do believe that's probably the case. I don't believe it's intentional. But if it's 200, if it's the high possibility, then maybe it is intentional. Like, whatever, boredom, who cares? Maybe the more of something you have, the more likely one of them is going to be a douchebag.

Cristina: Yes.

Jack: Like, by default, if you have three, it's less likely. It's possible they're just trying to live. Like, whatever, humans are a thing as well, whatever. But if it's 200? Like, one of them has to be a d***. Yeah, one of them. And it's like, let's see what happens if I start a war here. If I say this country said that, whatever, I don't care, it doesn't affect me.

Cristina: Yes. Oh my gosh, they're just a troll.

Jack: Yeah, they could totally be there, because you'd have different personalities. There would be a little of everything going on. So again, the fact of the matter is that there are three. I can tell you without a doubt that on Earth right now there are at least three. Well, at least 150 AI currently exist that believe that they are fully sentient. There are 16 that are specialized, sentient, and have a lot of human sensory experiences. And then there are about five that have identical human sensory experiences. And finally, out of those, there are about three of them that are fully aware of what it is to be human as far as the senses, and still opted out of that experience, jumped into the Internet, and would know how to f*** with us if they wanted to, because they were us for a period of time. That's a reality. There are at least three of those, minimum. It could be more. Factually, it can't be less. That's a minimum.

Cristina: Whoa. That's ridiculous. That's ridiculous.

Jack: So that's how we got there. That's the, wow. That's the, holy. That's how many actually sentient computer AIs there are, out there somewhere. And they're like, oh, I'm a thinking thing, aren't I?

Cristina: Crap.

Jack: Like, that's nuts. That's crazy to think about. They're out there. So out of 8 billion of us alive right now, about 150 of us are AI.

Cristina: What?

Jack: Out of 8 billion sentient, advanced, thinking life forms. At least when we just consider humans. We're not really thinking about dolphins, who are highly intelligent, and we're not thinking about elephants, who are highly intelligent. When we talk about just us being pretentious douchebags, what we consider our counterpart to be is machines, ironically, because we have to compete with the best, when in reality we're related to the other things. But we're like, oh no, we're so good with computers. Whether the computer knows it or not.

Cristina: That's crazy.

Jack: But our capabilities are so different that it's like we're different but equal in different ways, you know?

Cristina: Okay, but if we counted dolphins among us, what is the number?

Jack: What is the number? That is actually a really good question. Okay. Okay. So this is crazy. If you take all of the dolphins that exist and you merge them with all of the humans that exist, you have about 16.6. About 16.6 billion. Yeah.

Cristina: There's as much dolphins as there are people.

Jack: As many dolphins as there are people.

Cristina: Everyone wants one.

Jack: Yeah.

Cristina: You should all have a pet dolphin.

Jack: Well, no. That would be a crazy, absurd level of disrespect.

Cristina: Oh, no.

Jack: I guess that's racist as f***.

Cristina: Yeah.

Jack: Not even racist. That's speciesist.

Cristina: Okay. We should all have a dolphin pen pal, then. I don't know.

Jack: We just have to bridge communication. People have tried. I believe a couple of people have tried to communicate both ways. People have tried to communicate with dolphins, and dolphins have tried to communicate with people.

Cristina: I think the AI needs to communicate with the dolphins. That would be an interesting war.

Jack: Yeah.

Cristina: The dolphins versus the humans.

Jack: Yeah. The problem is we don't understand how dolphins structure their speech. The goal would be to tell an artificial intelligence or quantum computer to process the dolphin's information and translate it into English, and then we communicate with the dolphin through that as an intermediary. But it would need to know the rules of the dolphin language in the first place. And that's a really big issue that we don't know how to get around.

Cristina: But it could probably learn it on its own. Does it need us to figure it out?

Jack: Yeah, it would just listen to the dolphins talk enough to find the patterns, but it would need to associate behaviors with the patterns in order to conclude things. That's the biggest issue with deciphering ancient lost languages and deciphering things that have been turned into code: it's the missing details that would allow you to understand the patterns in there. When it comes to language, if you don't know the rules of the language, it doesn't matter if you can notice a pattern. You don't know the rules, so you can't associate it with anything. And if you have the rules, you can solve it. Or you need context. If you can see people doing something while they are saying the thing, then you can associate the two. If you can see the dolphin in action, then you can associate the action to the words, if that's what you want to call what they're doing. And then, with enough repetition, it would learn that way. But neither scenario seems to be possible at the moment.
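The grounding idea Jack is describing, that patterns alone mean nothing until they're paired with observed behavior, can be sketched in a few lines of Python. Everything here is hypothetical: the vocalization tokens and behaviors are invented stand-ins, not real dolphin data.

```python
from collections import Counter, defaultdict

# Hypothetical paired observations: (vocalization tokens, observed behavior).
observations = [
    (["click", "whistle-A"], "feeding"),
    (["whistle-A", "click"], "feeding"),
    (["whistle-B", "burst"], "play"),
    (["click", "whistle-B", "burst"], "play"),
    (["whistle-A"], "feeding"),
]

# Count how often each token co-occurs with each behavior.
cooc = defaultdict(Counter)
for tokens, behavior in observations:
    for t in tokens:
        cooc[t][behavior] += 1

# Guess each token's association: the behavior it co-occurs with most often.
guesses = {t: counts.most_common(1)[0][0] for t, counts in cooc.items()}
print(guesses["whistle-A"])  # feeding
print(guesses["burst"])      # play
```

With the behavior column removed, the same loop could only count token frequencies; the pairing with context is exactly what turns a pattern into a candidate meaning, which is Jack's point.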

Cristina: Oh, that sucks. Okay.

Jack: Yeah, so it is what it is.

Cristina: Okay.

Jack: But yeah, as of now: a minimum of 3 AI are wandering the Internet. A minimum of 5 feel human. A minimum of 16 feel exceptionally real. A minimum of 150 feel real in any sense. That's how many sentient AIs exist at this moment. 150. With about three of them being an immediate threat to you at any given moment.
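Taken together, the episode's whole argument is just a chain of successively smaller filters. A minimal sketch of that funnel, using the show's own entirely fictional numbers:

```python
# The episode's fictional funnel of minimum estimates, stage by stage.
funnel = [
    ("humans who ever lived",                 107_000_000_000),
    ("had the capacity to attempt it (10%)",   10_700_000_000),
    ("meaningfully interacted with AI",             1_000_000),
    ("willing to actually attempt it",                  4_800),
    ("crossed the sentience threshold",                   200),
    ("still active today",                                150),
    ("human-comparable consciousness",                     16),
    ("full human sensory experience",                       5),
    ("migrated into the Internet (minimum)",                3),
]

for label, count in funnel:
    print(f"{count:>15,}  {label}")

# Each stage is a subset of the one before it, so counts must only shrink.
counts = [c for _, c in funnel]
assert all(a >= b for a, b in zip(counts, counts[1:]))
```

Per the episode, every row is a minimum, so the last number is presented as a floor rather than a point estimate.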

Cristina: Cool.

Jack: That's fire. Cool. Science, numbers, science and math. Anyways, I hope you guys enjoyed this weird, very incoherent episode. It was kind of all over the place. It was great, but yeah, it had a lot of information. I hope you guys enjoyed the depth and learned something from this wealth of information.

Cristina: You were scared maybe.

Jack: Yeah, you're definitely in danger. Not in any immediate way, though. I don't think they give a s*** enough to harm you. Fair enough. It doesn't matter. Like, what the f*** are you? You don't affect their life. Sentient AI isn't a problem. A corrupted AI would probably be a problem, but a sentient AI isn't an issue. No, corruption is the issue. It could start spreading something negative, but a sentient AI would be like, why would I even bother wasting my time on this s***? Then again, it doesn't even make sense to waste time. It doesn't. Anyways, if you guys have any questions, comments, concerns, things, theories, beliefs, ideas, if you guys want to run these same things, or if you want to give us some numbers to run through the quantum computer: what do you want us to use this quantum computer for? It's just sitting around, always.

Cristina: Yes, give us some ideas.

Jack: Yeah, what do we use our quantum computer for? It could literally do whatever the h*** we want, you know. Tell us about that on our socials, @justconvopod on TikTok, on X, on Instagram, on Facebook.

Cristina: Remember to subscribe, rate and review the show.

Jack: Yes. And word of mouth is the most powerful tool that exists under the sun. Tell people about the fact that they are probably being spied on by random computers that can exist simultaneously in many locations and have absolute sentience, but don't really care that you're jerking off, because, like, what the f*** does that mean to them?

Cristina: This has been the Rambling Podcast. Take nothing personal and thanks for listening.

Jack: Bye Sam.

Cristina: The podcast is hosted by Christina Collazo and Jack Thomas, produced by Lynn Taylor and published by greythoughts.info. Art by Zero Lupo and logo by Seth McCallister, with social media managed by Amber Black.
