Speaker B 00:08
The topic we're starting with today is Q + A = I, and this equation is about the design of ignorance. When we have a question and the answer to it, we have information. When we have the question but not the answer, we have uncertainty; that is, we know what we don't know. And if we don't know the question and we don't know the answer, that equals ignorance, which is we don't know what we don't know. This is essentially the biggest question here: the system that distributes information can control the crowd by designing ignorance. And I think, especially in the era of AI, we should be concerned about who is designing the AI, when AI is answering most of our questions now. If we don't know what questions to ask, how can we progress with the current AI models?
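A minimal sketch of this framework in code, purely illustrative (the function name and state labels are mine, not part of the discussion): an epistemic state is classified by whether we hold the question and whether we hold its answer.

```python
# Illustrative sketch of the Q + A = I framing (names are mine, not
# from the discussion): classify an epistemic state by whether we
# hold the question and whether we hold its answer.
def epistemic_state(have_question: bool, have_answer: bool) -> str:
    if have_question and have_answer:
        return "information"   # Q + A = I: we know what we know
    if have_question:
        return "uncertainty"   # we know what we don't know
    return "ignorance"         # we don't know what we don't know

print(epistemic_state(True, True))    # information
print(epistemic_state(True, False))   # uncertainty
print(epistemic_state(False, False))  # ignorance
```

The speakers' concern is the third branch: nothing inside the system ever prompts you to ask the question that would move you out of it.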

The Mediator 01:39
Speaker A 02:37
The Mediator 04:27
Speaker A has expressed his concern with AI and how much information we get from it, because he thinks it controls truth and reality, which shapes our beliefs about the physical world around us. So he's questioning the credibility of the information we get from AI. We think AI can be the answer to everything, that it knows everything, but does it actually know everything, and are we getting the right kind of information?

Speaker B 05:14
I think, yes, the problem with the current AI models is that they generate text, generate answers, from the pool of human knowledge, and that essentially leads to a problem: we're stuck in a loop of knowing what we've known and cannot really step outside of it. And I think the credibility of AI is such a strange topic. That is, do you think right now we need to deal with false information more, or do you think we should deal with stepping outside the comfort zone of our current knowledge? I'm wondering, what's your thought on it?

The Mediator 06:29
Speaker A 08:22
The Mediator 12:25
So he has an answer and a question for you. He says that the false information is worrisome. And he says that the kind of information we get from AI is very similar to the way humans distribute information through journalism and documentation, because if you think about not just the media but even history, the things we've learned are all very selective and subjective. So he is also very worried about the kind of information we get from AI, because we don't really know what it's selecting from. He also wanted to ask you: the difference between AI and humans is that it takes us a long time to retrieve information, but AI's processing speed is so quick. So if we continue to develop that aspect of AI, because of its processing speed, don't you think the kind of information we get will be different, since it has the capacity to gather so much more in such a short amount of time? From that, don't you think we'll get new information?

Speaker B 14:10
I think the speed of AI really does not affect the quality of the information; it has a selectiveness and subjectiveness to it similar to traditional journalism's. In fact, if we are fixated on the topic of false information, then AI might be much better, because it gathers so much more information from all other sources, and the developers are trying to eliminate false information through how they train it. So in my opinion, false information is a solvable problem, because once you are aware of it, you will try to find the truth, the true information, if we want to call it that. It's the same with journalism: once people find out the truth, they start to doubt the news source, and the reliability of that news decreases. What I'm saying is that there is a solution to it. But with ignorance, we really don't know what to ask, so the ignorance will stay; it will remain. We won't even start to think about what is outside the bubble we are in. I feel that's more concerning. I'm not sure about the issue of speed, because no matter how fast or slow information spreads, there is still going to be false information, and the collective nature of AI decreases the possibility of false information.

The Mediator 16:41
Speaker A 17:43
The Mediator 19:42
Speaker A is doubtful that false information is solvable, because he thinks that even when we're aware something is false, we continue to distribute it. For instance, even when there are allegations that turn out not to be true, people continue to generate and distribute that information. It is in our nature to keep doing that. So he thinks the false information AI gives is not really an issue, because for him it's not the real issue. Let's see how you feel.

Speaker B 20:51
I am confused by Speaker A's point, because his conclusion contradicts what he's saying. If you think misinformation is unsolvable and AI is making the situation more serious, then it is a problem, right? But I still disagree with him that false information is unsolvable, because eventually at least part of the public would come to know the truth, whether or not the influence of the false information still exists. The truth is out there somewhere, and people will discover it sooner or later. The falsehood could persist for a thousand years, but some people would still discover the truth eventually, and I think that's, in a way, enough. That's my take on it, and I think his conclusion contradicts itself: if he thinks falseness is serious and unsolvable, then AI is deteriorating the situation. I don't have anything to say beyond that.

The Mediator 23:06
Speaker A 25:29
The Mediator 28:40
So my question for you is: the technology we have, specifically AI, operates under capitalist logic, right? It's more about consumption, and that dictates the information we get. So there is a paradox, because we expect AI to give us the truth, but how do we train it to give us the truth when it operates under capitalism? The information we receive is not always the truth. We think it's the truth, but it's not; it's always regulated. He thinks AI will probably make that process a little easier and shorter. But what is your take on that? How do we train it to make our lives better? What do you think?

Speaker B 29:49
This is exactly what I was talking about at the beginning. If we expect AI to give us truth while it's actually trained under someone else's instruction, then we are in that loop: we are strengthening a given truth to shape our reality. So I think there's no clean answer to it. We need to hold the owners of the AI accountable; we cannot trust them one hundred percent. We have to have laws and regulations for their decisions. So it's mostly about the ethics of AI. To achieve at least some sort of truth, we need to hold those people liable and ask them to at least promise to stick to the truth.

But in a broader sense, even if those people promise to give us the knowledge that's out there, that's the other thing I was talking about at the beginning: we don't know what we don't know. The AI is trained on what we've already known. So if there is no paper on a breakthrough in a certain field, for example biology or chemistry, then the AI will never make the breakthrough itself. That's a simplified example, but it goes beyond scientific findings to philosophy, anthropology, and so on. That's my major concern in the current state, and I think it's a very difficult issue to solve.

Also, at the current stage, developers are feeding the AI synthetically generated text and images in order to train it, and they find that the model just gets worse; it will not be as good as before if it consumes its own output. So where does the extra knowledge come from? It can only come from humans, but we're not generating more knowledge for it to consume. I think that also means we are now the means of production for the AI. We are producing for and serving the AI, because if we want it to get better, we will ultimately become its workforce. That's also how AI works under a sort of capitalist society. There are a lot of aspects to this, but I don't want to expand on it too much because we don't have time.
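A toy illustration of the self-consumption point, a minimal sketch rather than the actual experiments (the Gaussian setup is an assumed stand-in for a generative model): each generation refits only on samples drawn from the previous fit, with no fresh human data, and the fitted distribution tends to narrow over time.

```python
# Toy analogue of "the model gets worse if it consumes its own output":
# each "generation" of the model is fit only to synthetic samples drawn
# from the previous generation, with no fresh human data.
# (Illustrative sketch only; real model-collapse studies concern
# generative models retrained on their own synthetic text/images.)
import numpy as np

rng = np.random.default_rng(42)

mu, sigma = 0.0, 1.0  # generation 0: the "human data" distribution
for gen in range(1, 201):
    synthetic = rng.normal(mu, sigma, size=20)      # sample from the current model
    mu, sigma = synthetic.mean(), synthetic.std()   # refit on its own output
    if gen % 40 == 0:
        print(f"generation {gen:3d}: mu={mu:+.3f} sigma={sigma:.3f}")

# sigma follows a downward-biased random walk, so over enough
# generations the fitted distribution typically collapses toward a
# point: the tails (the rare knowledge) are the first thing forgotten.
```

The design point this is meant to echo: without new human-generated data entering the loop, each retraining step can only lose information, never add it.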

The Mediator 33:59
Speaker A 36:48
The Mediator 40:50
I'm basing this on Speaker A's statement and also on yours. If there is no breakthrough in our knowledge of the world, like you said, in physics or chemistry, then there will be no breakthrough in AI either, right? But then are you saying that, because at this point we're not making much progress in discovering new knowledge, the development of AI will be stagnant, meaning it won't go much further and will reach its limit? Then should we … is this the right time to be developing these AI models further, when, by your claim, they only reflect our current knowledge of the world? Then what's the point? Should we just be working on our knowledge of the world instead? What do you think?

Speaker B 42:14
I am 100 percent in support of diverging from putting so much effort into advancing beyond the current point of AI, because, well, maybe we could invent newer models, but right now all the models are just training on different or larger databases, which is honestly not that helpful. There are already papers coming out this August stating that giving a model more data won't necessarily improve it and could, in reverse, damage it. And the current stage of AI is helping us solve a lot of technical problems; for example, you can definitely ask it to solve your homework, mathematical questions, whatever. But honestly, it cannot really create anything new for you, and researchers definitely won't ask it to solve their research questions for them. That's for sure. So yes, I feel that we as a society should diverge from investing so much into AI. And honestly, the investment is also damaging our environment, because it consumes so much electricity, and that is really polluting the Earth. That's my stand on it.

The Mediator 44:03
Speaker A 47:00