Speaker B 00:08
The Mediator 01:39
So Speaker B has talked about the design of ignorance and how it is formulated. She was asking how much access we get, especially in a society where AI takes control, or is starting to take control, of how much information we get and what kind of information we get. She was also thinking about what kind of questions we can ask the AI, which in turn determines the amount of information we get. So there's a question about the origin of these answers, and whether we are actually getting the right kind of information.

Speaker A 02:37
I am mostly concerned about this question around AI and its control over how much information, or what kind of information, it provides to people, in the sense that it controls the truth or reality that the people asking those questions live in. I'm using the words truth and belief in a similar sense here, because I personally think those two terms are interchangeable, especially in this age. I'm also worried, as Speaker B stated, about the power of AI to shape those realities and, in turn, to affect the physical world and how we understand it. I think it has the potential to redefine the world we live in right now.

The Mediator 04:27
Speaker B 05:14
The Mediator 06:29
Okay, I'm actually adding more to it. Speaker B has also raised a problem with the AI model: it derives from the pool of human knowledge. In a way, we think that the AI knows everything and that we're getting the ultimate knowledge out of it, but it is actually based on the knowledge we already have and is not generating its own revolutionary knowledge. So she was asking: at this point in time, should we be dealing with the false information that we might be receiving, or should we be working on expanding the current scope of knowledge that AI can provide us? Should we be controlling the false information now, or should we just keep moving forward and see where it goes?

Speaker A 08:22
On the points that Speaker B has made, there is one answer and one question that come to mind. I want to start with the answer to Speaker B's question: should we be worried about the false information that AI generates, or should we try to expand the scope of the technology further and see what happens? If I had to choose between those two, I would say I am more worried about generated false information and about that information perpetuating, because I'm really interested in how journalism and the documentation of modern media work. I don't think AI-generated information or content helps alleviate any of the problems that already exist in the current state of journalism and mass media; there is already plenty of false information circulating that is generated by humans. Letting AI into the cycle would add to the chaos. And on the point that was made before, that the information AI generates is based on human knowledge so nothing revolutionary is going to come out of it, I want to ask Speaker B whether she thinks the bandwidth, the capacity to process so much data at once or in such a short amount of time, would result in anything different from what humans are capable of. Because the way I see it, the most significant difference between AI and humans is processing speed; it is so much greater than the capacity of the human brain.

The Mediator 12:25
Speaker B 14:10
The Mediator 16:41
Speaker B has said that processing speed has no bearing on the quality of information, so I want to know how you feel about that. She has also said that false information is solvable once we are aware of it, but I'm not sure how, and I want to know how you feel about that too. How do we know when information is false and when it is the truth, when we're only ever seeing one side of the story? We don't always know both sides of the story.

Speaker A 17:43
On the first point she made, about speed not determining the quality of information: the question I asked was just out of curiosity, so I don't actually have an opinion on it and I'm not fit to take a side, but I think I understand the point. On the question of whether false information can be solved, I am quite doubtful, because false information isn't problematic only because it's unverifiable; just having false information around, even when everyone is aware that it is false, is already problematic. We see this every time some allegation turns out to be false: the effect of the information having circulated persists even after it's proven wrong.

The Mediator 19:42
Speaker B 20:51
The Mediator 23:06
Speaker A has said that, yes, it is true that even when we're all aware of the falseness of the information we receive, we continue to distribute it and the information keeps circulating, but there's always going to be someone who knows the truth, and there is always truth that persists. My question is… this is hard, because as humans we don't know what the truth is. So how do we train the AI model to give us the truth when we don't know what truth is, and how do we expect AI to solve that issue? In a way it is in our nature: yes, we want to discover the truth, but sometimes I find it's not really about the credibility of the information; it's about the act of gaining knowledge and sharing it with the people around you. And I think we take great pleasure in that. So I think that's why, even in the media, like you said with journalism, we just keep making new information and it never stops. So I feel like our approach to AI is very contradictory, because we don't even know what truth is. What is truth?

Speaker A 25:29
That aligns with the point I was making. It's not about how valid the information is; the problem is that it worsens the situation, especially now under the logic of capitalism, which dictates what kind of information gets rewarded when it's distributed. I agree that the idea of us trusting AI to give us the truth is contradictory. It's a paradox in itself, because AI is based on its own database, on human knowledge, the information that humans have accrued so far. Bringing AI to the table just speeds up this negative feedback loop in which unhelpful information circulates. It would operate exactly like, or very similarly to, how everything operates right now without AI. Even now, information that pulls more views is rewarded, and it goes back into the system to be more exposed and consumed more, which feeds the loop. AI would make the process so much easier and shorter. Maybe this will get out of hand.

The Mediator 28:40
Speaker B 29:49
The Mediator 33:59
So if AI is feeding on the information that we're giving it, and if there's no breakthrough in our knowledge of the world in any field, biology, chemistry, any field at all, then there will be no breakthrough in AI either. She's claiming that, in a way, we're the workforce for AI. So there's this question of regulation and ethics in AI, and that's more of her concern than false information changing our perception of the world; it's more about how much control we have. We have to be aware that the information we get might not always be the truth, and we have to learn how to make use of it. The AI models are continuing to develop, right, so I guess my question would be: what do you think the limit is? When is it going to reach its limit? Because if it is based on our knowledge of the world and we're not making as much progress, there's only so much we can do to develop the model. So when do you think the technology will hit its limit? Yeah, let's keep it at that.

Speaker A 36:48
It's really interesting to point out that there wouldn't be a breakthrough point in the development of the technology. That would mean all the predictions of the revolution it's going to bring are daydreams. I've never thought of it that way. The point about humans becoming the workforce for AI is interesting, because then it boils down to the price of labor. AI is a hot topic among entrepreneurs because it can replace a lot of the labor that exists right now, in a lot of manufacturing systems. Predicting that humans would then become the workforce that feeds information into AI… I don't know what to make of it, but it's really interesting and a grim thing to imagine. If it's true that there won't be a breakthrough, then I guess nothing would change other than a couple of bubbles forming and popping. I mean, we see bubbles in the creative industries right now that the technology is causing, and some, I think, have already popped. That would be really underwhelming though. It would mean that nothing meaningful has happened.

The Mediator 40:50
Speaker B 42:14
The Mediator 44:03
Speaker B is in favor of diverging from AI development; feeding more information into the AI models is actually damaging to the world we're living in right now. Let me think about this. I mean, like you said, there's a lot of investment going into this field, right? Because soon it is going to be implemented in our daily lives, and it is going to make our lives so much easier and more convenient. But based on her statement, it is not going to generate any new kind of information, so there's only so much it can do. So what's the point of putting so much energy and investment into this field when we can see its limitations? Since this is the last question: where do you think this is heading, from the perspective of a designer? Because, yes, it is going to be prevalent in our lives, but then after that, what happens? It is not going to generate new information on its own. So what happens next? What do you think is next? It's so broad.

Speaker A 47:00
Diverging from AI development… I wouldn't say whether I agree with that opinion or not. What I'm sure of is that it's not going to stop; people are not going to stop trying to feed more stuff into these models. What do I think will happen, as a designer? A lot of artists see the influence AI is having on an artist's or a designer's practice. I find some AI-generated images legitimate, but a lot of them are really just boring. If AI doesn't have a revolutionary breakthrough, this is all going to settle down. AI will be part of our process of generating images, generating designs. But designers being replaced by AI? I don't think the situation will turn out that bad.