Speaker B 00:08
The topic that we're starting with today is Q plus A equals I. And this equation is about the
design of ignorance. When we have the question and the answer to it, we have information. When we
have the question but we don't have the answer to it, we have uncertainty; that is, we don't know
the answer to what we've asked. And if we don't know the question and we don't know the answer,
then that equals ignorance, which is: we don't know what we don't know. This is essentially the
biggest question here, when the system that distributes information is, in a way, controlling the
crowd by designing that ignorance. And I think especially in the era of AI, we should be concerned
about who is designing the AI, since AI is answering most of our questions now, and if we don't
know what questions to ask, how can we progress with the current AI models?
The Mediator 01:39
So Speaker B has talked about the design of ignorance and its formula. She was asking how much
access we get, especially in a society where AI takes control, or, well, is starting to take
control, of how much information we get and what kind of information we get. And also thinking
about what kind of questions we can ask the AI, which determines the amount of information we get.
So there's a question of the origin of these answers, and are we actually getting the right kind
of information?
Speaker A 02:37
I am mostly concerned about this question around AI and its control of how much information, or
what kind of information, it provides to people, in the sense that it controls the truth or
reality that the people who ask those questions live in. I'm using the words truth and belief in a
similar sense here, because I personally think those two terms are interchangeable, especially in
this age. And I'm also worried, as Speaker B stated, about the power of AI in shaping those
realities and, in turn, affecting the physical world and how we understand it. I think it has the
potential to redefine the world we live in right now.
The Mediator 04:27
Speaker A has expressed his concern with AI and how much information we get from it, because he
thinks that it controls the truth and reality that shape our belief about the physical world
around us. So he's questioning the credibility of the information that we get from AI. We think
that AI can be the answer to everything, that it knows everything, but does it actually know
everything, and are we getting the right kind of information?
Speaker B 05:14
I think, yes, the problem with the current AI models is that they generate texts, generate
answers, from the knowledge pool of humans, and that essentially leads to a problem: we're stuck
in the loop of knowing what we've already known and cannot really step outside of it. And I think
the credibility of AI is such a weird topic. That is, do you think right now we need to deal with
false information more, or do you think we should deal with stepping outside the comfort zone of
our current knowledge? I'm wondering what your thoughts are on it.
The Mediator 06:29
Okay, I'm actually adding more to it. Speaker B has also pointed out the problem with the AI
models: that they derive from the pool of human knowledge. So in a way we think that the AI knows
everything and that we're getting the ultimate knowledge out of it. But if you think about it, it
is actually based on the knowledge that we already have and is not generating its own kind of
revolutionary knowledge. So she was asking: at this point in time, should we be dealing with the
false information that we might be receiving, or should we be working on expanding the current
scope of knowledge that AI can provide us? So should we be controlling the false information at
this point, or should we just keep moving forward and see where it goes?
Speaker A 08:22
On the points that Speaker B has made, there is one answer and one question that arise in my mind.
I want to start with the answer to Speaker B's question of whether we should be worried about the
false information that AI generates, or whether we should try to expand the scope of the
technology further and see what happens. If I had to choose between those two, I would say I would
be more worried about generated false information and that information perpetuating, because I'm
really interested in the way journalism and the documentation of modern media work. I don't think
AI-generated information or content helps alleviate any of the problems that already exist in the
current state of journalism and mass media; I think there is already plenty of false information
circulating that is generated by humans, and letting AI into the cycle would add to the chaos. And
on the point that was made before, that the information AI generates is based on human knowledge
so there's going to be nothing revolutionary, I want to ask Speaker B if she thinks the bandwidth,
the capacity to process so much data at once or in such a short amount of time, would result in
anything different from what humans are capable of. Because the way I see it, the most significant
difference between AI and humans is the processing speed; it's so much greater than the human
brain's capacity.
The Mediator 12:25
So he has an answer and a question for you. He says that the false information is worrisome. And
he says that the kind of information that we get from AI is very similar to the way humans
distribute information through journalism and documentation, because if you think about not just
the media but even history, the things that we've learned, it's all very selective and subjective.
So he is also very worried about the kind of information we get from AI, because we don't really
know what it's selecting from. And he wanted to ask you: the difference between AI and humans is
that for us it takes a long time to retrieve information, but with AI the processing speed is so
quick. So if we just continue to develop that aspect of AI, its processing speed, don't you think
that the kind of information we get will be different, because it has the capacity to gather so
much more in such a short amount of time? From that, don't you think we'll get new information?
Speaker B 14:10
I think the speed of AI really does not affect the quality of the information. It has some kind of
selectiveness and subjectiveness to it, similar to traditional journalism. In fact, I feel like if
we are fixated on the topic of false information, then AI might be much better, because it will
gather so much more information from all the other sources. And I think the developers are trying
to eliminate false information based on how they train it. So in my opinion, false information is
a problem that's solvable, because once you are aware of it, you will try to find the truth of it,
the true information, if we want to say that. It's the same with journalism: once people find out
the truth, they will start to doubt the news source, and the reliability of that news will
decrease. So what I'm saying is that there is a solution to it. But with ignorance, if we really
don't know what to ask, the ignorance will stay; it will remain. We won't even start to think
about what is outside the bubble we are in. So I feel that's more concerning. I'm not sure about
the issue of speed, because no matter how fast or how slow information spreads, there is still
going to be false information, and the collective nature of AI decreases the possibility of false
information.
The Mediator 16:41
Speaker B has said that the processing speed does not have any control over the quality of
information, so I want to know how you feel about that. And she has also said that false
information can be solvable once we are aware of it, but I'm not sure how. I want to know how you
feel about that, because how do we know when information is false, and how do we know when it's
the truth, when we're only seeing one side of the story all the time? We don't always know both
sides of the story.
Speaker A 17:43
On the first point that she made, about speed not affecting the quality of the information: the
question I asked was just out of curiosity, so I don't actually have an opinion on that, or I'm
not fit to take any side on it, but I guess I understand the point.
And on the question of whether false information can be solved, I am quite doubtful, because false
information isn't problematic just because it's not verifiable; simply having false information
around (even when everyone is aware of it being false) is already problematic. We see this every
time some allegation turns out to be false: the effect of the information having been circulated
persists even though it's been proven wrong.
The Mediator 19:42
Speaker A is doubtful that false information can be solved, because he thinks that even when we're
aware that it's false, we continue to distribute the information. For instance, even when there
are allegations that turn out not to be true, people continue to generate information and
distribute it; it is in our nature to continue doing that. So he thinks that the false information
AI gives is not a solvable issue. Hm. Let's see how you feel.
Speaker B 20:51
I am confused by Speaker A's point, because his conclusion contradicts what he's saying: if you
think misinformation is unsolvable and AI is adding to the seriousness of the situation, then it
is a problem, right? But I still disagree with him that false information is unsolvable, because
eventually at least a part of the public would know the truth, whether or not the influence of the
false information still exists. The truth is there somewhere, and people would discover it sooner
or later. The false information could still exist for a thousand years, but some people would
discover the truth eventually, and I think that's, in a way, enough. Yeah, that's my take on it,
and I think his conclusion contradicts itself: if he thinks falseness is serious and unsolvable,
then AI is deteriorating the situation. I don't have anything to say beyond that.
The Mediator 23:06
Speaker B has said that, yes, it is true that even when we're all aware of the falseness of the
information that we receive, we continue to distribute it and the information gets circulated, but
there's always going to be someone who knows the truth, and there is always truth that persists.
My question is… this is hard, because as humans, we don't know what the truth is. Then how do we
train the AI models to give us the truth when we don't know what truth is, and how do we expect AI
to solve that issue? In a way, yes, we want to discover the truth, but sometimes I find that it's
not about the credibility of the information, it's about the act of gaining knowledge and sharing
it with the people around you. And I think we take great pleasure in that. So I think that's why,
even in the media, like you said, with journalism, we just keep making new information and it
never stops. So I feel like the way we approach AI is very contradictory, because we don't even
know what truth is. Like, what is truth?
Speaker A 25:29
That aligns with the point I was making. It's not about how valid the information is; the problem
is that it worsens the situation, now especially under the logic of capitalism, obviously, and
that capitalist logic dictates what kind of information gets rewarded when distributed.
I agree that the idea of us trusting AI to give us the truth is contradictory. It's a paradox in
itself, because AI is based on its own database, on human knowledge, the information that humans
have accrued so far. Bringing AI to the table just speeds up this negative feedback loop in which
unhelpful information circulates. It would operate exactly, or similarly, to how everything
operates right now without AI. Even now, information that pulls more views is rewarded, and that
again goes back into the system to be exposed more and consumed more, which feeds the loop. AI
would make the process so much easier and shorter. Maybe this will get out of hand.
The Mediator 28:40
So my question for you is: the technology that we have, specifically AI, is under capitalist
logic, right? It's more about consumption, and that dictates the information we get. So there is a
paradox, because we expect the AI to give us the truth, but how do we train it to give us the
truth when it is under capitalism? The information that we receive is not always the truth; we
think that it's the truth, but it's not, it's always regulated. So he thinks that AI will probably
make that process easier and shorter. But what is your take on that? How do we train it to make
our lives better? What do you think?
Speaker B 29:49
This is exactly what I was talking about in the beginning: if we expect AI to give us truth while
it's actually trained under someone else's instruction, then we are in a loop of strengthening
that given truth to shape our reality. So I think there's no answer to it. We need to hold the
owners of the AI accountable; we have to hold them liable, and we cannot trust them 100 percent.
We have to have laws and regulations for their decisions. So it's mostly about the ethics of AI.
And I do think that in order to achieve at least some sort of truth, we need to hold those people
liable. We cannot trust them; we need to ask them to at least promise to stick to the truth. But
in a broader sense, even if those people promise to give us the knowledge that's out there, that's
also the thing I was talking about in the beginning: we don't know what we don't know. It's all
trained on what we've already known. So if there's no paper on a breakthrough in a certain field,
for example biology or chemistry, then the AI will never make the breakthrough itself. That's an
example to simplify it, but it goes beyond scientific findings to philosophy and anthropology,
etc. That's my major concern in the current state. But this is a very difficult issue to solve,
right? And I think also, at the current stage, the developers are feeding the AI synthetically
generated text and images in order to train it, but they find that the model just gets worse; it
will not be as good as before if it consumes itself. So where is the extra knowledge?
Where does the extra knowledge come from? It can only come from humans, but we're not generating
more knowledge for it to consume. I think that also speaks to how we are now the means of
production for the AI. We are producing for and serving the AI, because if we hope for it to get
better, then we will ultimately become its workforce. That's also how AI works in a sort of
capitalist society. I feel like there are a lot of aspects to it, but I don't want to expand on it
too much, because we don't have time.
The Mediator 33:59
So if AI is feeding on the information that we're giving it, and if there's no breakthrough in our
knowledge of the world in any field, like biology or chemistry, then there will be no breakthrough
in AI either. So she's claiming that, in a way, we're the workforce for AI. And so there's this
question of regulation and ethics in AI, and that's more her concern than the false information
changing our perception of the world. It's more about how much control we have. We have to be
aware that the information that we get might not always be the truth.
We have to learn how to make use of it.
So, the AI models are continuing to develop, right? I guess my question would be: what would be
the limit? When is it going to reach its limit? Because if it is based on our knowledge of the
world, and we're not making as much progress, there's only so much we can do to develop the
models. So when do you think the technology will reach its limit? Yeah, let's leave it at that.
Speaker A 36:48
It's really interesting to point out that there wouldn't be a breakthrough point in the
development of the technology. That would mean that all the predictions of the revolution it's
going to bring are daydreams. I've never thought of it that way.
The point about humans becoming the workforce for AI is interesting, because then it boils down to
the price of labor. AI is a hot topic among entrepreneurs because it can replace a lot of the
labor that exists right now in a lot of manufacturing systems. Predicting that humans would then
become the workforce that feeds information into AI… I don't know what to make of it, but it's
really interesting, and a grim thing to imagine.
If it's true that there wouldn't be a breakthrough, I guess then nothing would change other than a
couple of bubbles forming and popping. I mean, we see bubbles in the creative industries right now
that the technology is causing; some, I think, have already popped.
That would be really underwhelming, though. It would mean that nothing meaningful has happened.
The Mediator 40:50
I'm basing this on Speaker A's statement and also on your statement. If there is no breakthrough
in our knowledge of the world, like you said, in physics or chemistry, then there will be no
breakthrough in AI either, right? But then, are you saying that, because at this point we're not
making much progress in discovering new knowledge, the development of AI will be stagnant, meaning
it's not going to go much further and it's going to reach its limit? Then should we… is this the
right time to be developing these AI models further? If, based on your claim, AI is reflective of
our current knowledge of the world, then what's the point? Should we just be working on our
knowledge of the world? What do you think?
Speaker B 42:14
I'm 100 percent in support of diverging from putting so much effort into trying to advance AI
beyond its current point, because, well, maybe we could invent newer models, but right now I feel
like all the models are just training on different or more databases, which is honestly not
helpful. I mean, there are already papers coming out this August stating that giving a model more
information wouldn't necessarily improve it and could, conversely, damage it. And I feel like, at
its current stage, AI is helping us solve a lot of technical problems. For example, you could
definitely ask it to solve your homework, mathematical questions, whatever, but honestly it cannot
really create anything new for you, and researchers definitely won't ask it to solve the question
for them, that's for sure. So I feel like, yes, we as a society should definitely diverge from
investing so much into AI. And honestly, the investment is also damaging our environment, because
it consumes so much electricity, and that is really polluting the Earth. Yes, that's my stand on
it.
The Mediator 44:03
Speaker B is in favor of diverging from AI development. Feeding more information into the AI
models is actually damaging to the world that we're living in right now.
Let me think about this.
I mean, like you said, there are many investments going into this field, right? Because soon it is
going to be implemented in our daily lives, and it is going to make our lives so much easier and
more convenient. But based on her statement, it is not going to generate any new kind of
information, so there's only so much that it can do. So what's the point of putting so much energy
and investment into this field when we see its limitation? Since this is the last question: where
do you think this is heading, from the perspective of a designer? Because, yes, it is going to be
prevalent in our lives, but then after that, what happens, right? Because it is not going to
generate new information on its own. So what happens next? What do you think is next? It's so
broad.
Speaker A 47:00
Diverging from AI development… I can't say whether I agree with that opinion or not. What I'm sure
of is that it's not going to stop. People are not going to stop trying to feed more stuff into
this model.
What do I think would happen, as a designer? A lot of artists see the influence AI is bringing to
an artist's or a designer's practice. I find some AI-generated images legit; a lot of them are
really just boring. If AI doesn't have a revolutionary breakthrough, this is all going to settle
down. AI will be part of our process of generating images, generating designs.
But designers being replaced by AI? I don't think the situation would turn out that bad.