Speaker B 00:08
The Mediator 01:39
So Speaker B has talked about the design of ignorance and the formula behind it. She was asking how much access we get, especially in a society where AI takes control, or, well, is starting to take control, of how much information we get and what kind of information we get. She was also thinking about what kinds of questions we can ask the AI, which determines the amount of information we get back. So there's a question of the origin of these answers: are we actually getting the right kind of information?

Speaker A 02:37
The Mediator 04:27
Speaker A has expressed his concern with AI and how much information we get from it, because he thinks it controls the truth and reality, which shapes our beliefs about the physical world around us. So he's asking, well, questioning the credibility of the information that we get from AI. We think that AI can be the answer to everything, that it knows everything, but does it actually know everything, and are we getting the right kind of information?

Speaker B 05:14
The Mediator 06:29
Okay, I'm adding more to it, actually. Speaker B has also pointed out a problem with the AI model: it derives from the pool of human knowledge. So in a way we think that the AI knows everything and that we're getting the ultimate knowledge out of it, but if you think about it, it is actually based on the knowledge that we already have and is not generating its own revolutionary knowledge. So she was asking: at this point in time, should we be dealing with the false information that we might be receiving, or should we be working on expanding the current scope of knowledge that AI can provide us? Should we be controlling the false information now, or should we just keep moving forward and see where it goes?

Speaker A 08:22
The Mediator 12:25
So he has an answer and a question for you. He says that the false information is worrisome, and that the kind of information we get from AI is very similar to the way humans distribute information through journalism and documentation. If you think about not just the media but even history, the things that we've learned, it's all very selective and subjective. So he is also very worried about the kind of information we get from AI, because we don't really know what it's selecting from. And he wanted to ask you: the difference between AI and humans is that for us it takes a long time to retrieve information, but with AI the processing speed is so quick. So if we continue to develop that aspect of AI, its processing speed, don't you think the kind of information we get will be different, because it has the capacity to gather so much more in such a short amount of time? From that, don't you think we'll get new information?

Speaker B 14:10
The Mediator 16:41
Speaker B has said that processing speed does not have any control over the quality of information. So I want to know how you feel about that. She has also said that false information can be solvable once we are aware of it, but I'm not sure how. I want to know how you feel about that too, because how do we know when information is false, and how do we know when it's the truth, when we're only seeing one side of the story all the time? Right? We don't always know both sides of the story.

Speaker A 17:43
The Mediator 19:42
Speaker A is doubtful that false information can be solvable, because he thinks that even when we're aware that it's false, we continue to distribute the information. For instance, even when there are allegations and it turns out they're actually not true, people continue to generate and distribute the information. It is in our nature to keep doing that. So the false information that AI gives is not really an issue for him. Hm. Let's see how you feel.

Speaker B 20:51
The Mediator 23:06
Speaker B has said that, yes, it is true that even when we're all aware of the falseness of the information we receive, we continue to distribute it and the information gets circulated, right? But there's always going to be someone who knows the truth, and there is always truth that persists. My question is … this is hard, because as humans we don't know what the truth is. Then how do we train the AI model to give us the truth when we don't know what truth is, and how do we expect AI to solve that issue? In a way it is in our nature: yes, we want to discover the truth, but sometimes, really, I find that it's not about the credibility of the information, it's about the act of gaining knowledge and sharing it with the people around you. And I think we take great pleasure from that. So I think that's why, even with journalism and the media, like you said, we just keep making new information and it never stops. So I feel like how we approach AI is very contradictory, because we don't even know what truth is. What is truth?

Speaker A 25:29
The Mediator 28:40
So my question for you is: well, the technology that we have, specifically AI, is under capitalist logic, right? It's more about consumption, and that dictates the information we get. So there is a paradox, because we expect the AI to give us the truth. But how do we train it to give us the truth when it is under capitalism? The information that we receive, it's not always the truth. We think that it's the truth, but it's not; it's always regulated. So he thinks that AI will probably make that process a little easier and shorter. But what is your take on that? How do we train it to make our lives better? What do you think?

Speaker B 29:49
The Mediator 33:59
So if AI is feeding on the information that we're giving it, and if there's no breakthrough in our knowledge of the world in any kind of field, like biology, chemistry, any kind of field, then there will be no breakthrough in AI either. She's claiming that in a way we're the workforce for AI. So there's this question of regulation and ethics in AI, and that's more of her concern than the false information changing our perception of the world. It's more about how much control we have. We have to be aware that the information we get might not always be the truth, and we have to learn how to make use of it. So, I mean, the AI models are continuing to develop, right? I guess my question would be: what do you think the limit would be? When is it going to reach its limit? Because if it is based on our knowledge of the world, and we're not making as much progress, there's only so much we can do to develop the model. So when do you think technology will reach its limit? Yeah, let's keep it that way.

Speaker A 36:48
The Mediator 40:50
I'm basing this on Speaker A's statement and also on your statement. So if there is no breakthrough in our knowledge of the world, like you said, in physics or chemistry, then there will be no breakthrough in AI either, right? But then, are you saying that because at this point we're not making much progress in discovering new knowledge, what's the point of … like, the development of AI will be stagnant, meaning it's not going to go much further and it's going to reach its limit? Then should we not, should we … is this the right time to be developing these AI models further? If, based on your claim, AI is reflective of our current knowledge of the world, then what's the point? Should we just be working on our knowledge of the world? What do you think?

Speaker B 42:14
The Mediator 44:03
Speaker B is in favor of diverging from AI development. Feeding more information into the AI models is actually damaging to the world that we're living in right now. Let me think about this. I mean, like you said, there are many investments going into this field, right? Because soon it is going to be implemented in our daily lives and is going to make our lives so much easier and more convenient. But based on her statement, it is not going to generate any new kind of information, so there's only so much that it can do. What's the point of putting so much energy and investment into this field when we can see its limitation? Since this is the last question: where do you think this is heading, from the perspective of a designer? Because, yes, it is going to be prevalent in our lives, but then after that, what happens, right? It is not going to generate new information on its own. So what happens next? What do you think is next? It's so broad.

Speaker A 47:00