OpenAI Board's "Separate But Equal" Moment | Should Open AI Stay as a Non-Profit?
Hi everyone, I am Ken Chan, the authenticity expert. Each week our Freedom by Design podcast dives deep into one current event, and our Digital Freedom Channel's own experts offer perspectives spanning the historical, the legal, the psychological, and global leadership. This week's episode is one you don't want to miss if you want to learn about AI and why it matters for companies and their boards and executives: how to be authentic in any important decision-making process and have a kinetic communications plan. I hope you enjoy this podcast, and click subscribe so you don't miss an episode.
This is Freedom by Design. In each episode, we scout all four corners of the world, providing three perspectives across historical, humanistic, and economic lenses, discussing two interrelated questions, helping you design your own digital freedom destiny. I'm Sam Adams, show creator at Freedom Channel, historian, and global political analyst. I bring the facts, the context, and the implications you didn't ask for but will definitely need at your next cocktail party. With me is my co-host Ken Chan, our executive producer, our very own Andy Cohen at Freedom Channel, but for the digital world. Let's begin our design.
Thanks, Sam. That's exactly why we're here, because digital power isn't abstract. It's about leadership, control, and the real-world impact of the decisions made behind closed doors. Our job? To break it down. Who's making the rules? Who's bending them? Why are they acting this way? And most importantly, how does it affect your digital freedom?

In each episode, we will spend about 15 minutes on a burning-hot topic of the week in a conversation between me and Ken. Then Doctor Carmen Diaz will join us and lead the conversation about the second topic, which is more around the development of digital freedom, AI, and how it may impact our human lives and emotions. Ken, what's on the table this week?
This week we will dive into the OpenAI board's decision to turn down Elon Musk's $98 billion shotgun wedding proposal. Did they make the right call, and have they convinced us that it is the right call? After that, Carmen and I are going to use the Auth Q framework to analyze the authenticity metrics around the board's action.

The OpenAI board's communication strategy, or lack thereof, demonstrates a significant lack of relational agility. They've missed a crucial opportunity to build trust and transparency with their stakeholders. The board's reliance on legalistic, minimalist responses created a vacuum of information, fueling speculation and mistrust.
Finally, we have an extra special one-on-one fireside chat, where Sam caught up with his high school friend and our Chief Science Correspondent at Large, Doctor Ibrahim Abdullah, to talk about the global AI arms race in light of DeepSeek's rise from China.

So China is working smarter, not harder. Does that mean China wins the AI race? Not necessarily. The more significant aspect is what it reveals about the global competitive landscape. India provides an instructive comparison: India possesses comparable technical talent pools but lacks structural integration.
First topic: the OpenAI nonprofit board rejects Elon Musk's shotgun wedding proposal to bring OpenAI back to its nonprofit roots. Here's a quick highlight from the rejection letter, plus some background information for our audience.

"Earlier this week, you, on behalf of your clients, sent a letter to acquire all of the assets of OpenAI and to do so imminently, subject to numerous conditions. ... In any event, your client's proposal, even as first presented, is not in the best interests of OpenAI's mission and is rejected. The decision of the OpenAI board on this matter is unanimous."
And for our listeners, OpenAI's mission is to develop artificial general intelligence (AGI) that benefits all of humanity. One of the board's key responsibilities is to ensure that its nonprofit roots and this mission statement are carried through, even as the organization has been steadily transitioning into a for-profit structure. Most recently, the step it is pursuing is the conversion of its for-profit arm into a Delaware Public Benefit Corporation (PBC).
So the question is: is the OpenAI board essentially gaslighting itself, twisting its mission to justify the exact profit-driven approach it was created to prevent? Sam, what do you think?

Well, Ken, that's the question on everyone's mind, isn't it? It certainly smacks of institutional capture, if you ask me. It reminds me of the Supreme Court's infamous Plessy v. Ferguson decision, where the court claimed racial segregation could somehow satisfy equal protection under the Constitution. In both cases, institutions seem to be reinterpreting their core principles to justify a contradictory outcome. But let's not be too hasty. There's a genuine tension here: advancing AI technology requires significant capital, and attracting top talent isn't cheap. Ken, is there a middle ground here that balances the mission with the practical realities of the AI arms race they're in?

I mean, they know that for any technology company, especially in a competitive landscape like AI that requires huge capital investments and talent, the need for capital is real from day one. So it's almost like they are setting themselves up for failure and want to believe that setting up this kind of golden share and safeguard approach could somehow spare them these conversations. At the end of the day, I think it's only a matter of time before they have to confront this.
You're right to point out the inherent challenge they faced from the start. It's like they built a dam knowing the river would eventually flood. The historical pattern, as you know, is unmistakable: every guardian eventually becomes captured by the very interests it was designed to check. OpenAI's nonprofit board holding a golden share was supposed to be the constitutional firewall between public benefit and private profit. Now they're arguing that pursuing profit through a Delaware PBC structure somehow better serves their nonprofit mission. That's not reasoning; that's rationalization.
Could you elaborate a little more on why these two things are actually comparable, and what happened in the Supreme Court case? Maybe that sheds some light on how this would resolve itself or actually push the conversation forward in a positive manner.

That's a great question, Ken. The parallel I see between OpenAI's situation and the Plessy v. Ferguson Supreme Court decision lies in the reinterpretation of core principles to justify a contradictory outcome. In Plessy v. Ferguson, the Supreme Court twisted the 14th Amendment's Equal Protection Clause to justify racial segregation, claiming that "separate but equal" facilities were constitutional. Similarly, OpenAI's board is twisting its founding mission of advancing AI for all humanity to justify its transformation into yet another...

We are all human, and humans, depending on how you see them, can be called intrinsically evil and logical, or pure at heart and just wanting to do a good thing. In any decision like this, there's no black and white answer, right? So the way I like to think about it is: has there been a systematic analysis of what exactly those five words, "advancing AI for humanity," actually mean? What are the metrics they are looking at, and how did they come to their conclusions? Is there any evidence that substantiates those for the moments when they have to really focus on the bottom line, report to a broader stakeholder base, and attract a much bigger working population as employees? The decision making slowly erodes, and then, you know, think about Google, think about Facebook.
That's a crucial point, Ken. Defining "advancing AI for humanity" is like trying to nail jelly to a wall. It's subjective, open to interpretation, and easily manipulated to fit a predetermined outcome. Take Google, for example. Their "don't be evil" mantra, once a guiding principle, has been watered down to "if it makes the user experience better, the end justifies the means" when it comes to data collection. Facebook's "connecting everyone" now conveniently allows everyone to create their own avatars, so there is more ad inventory for selling eyeballs. These examples highlight the slippery slope of mission creep: what starts as a noble aspiration can easily become a justification for profit-driven decisions. It's a difficult balance, and one that OpenAI's board seems to be struggling with. Do you think there's any way to prevent this kind of mission creep, or is it an inevitable consequence of growth and success?
I do think there are examples that could be counterarguments. Think about Apple, for instance. Everyone thought that after Steve Jobs, Apple might fall into the same band as Google and Facebook. But then you see how Tim Cook exhibited that classic authentic leadership, and how his own personal experience of coming out in a difficult situation was turned into Apple's ethos of protecting privacy for others. That's a very high-class act to continue to manage and maintain, and I'm not saying it's easy to do. On the other hand, you see Amazon, which, while it continues to provide value and focus on customer-first satisfaction, does so even in this day and age at the expense of its other stakeholders, specifically its labor force. So how you manage that is a classic combination of authentic leadership, communications, and building those frameworks from within.

There needs to be a genuine commitment from leadership, a clear framework for decision making, and a willingness to prioritize values over short-term profits. But let's bring this back to OpenAI. Given what we've discussed, do you think there's any chance they can course correct, or are they already too far down the path of prioritizing profit over their original mission?

It is still relatively early in the AI development cycle, so focusing on growing, getting the best talent, and continuing to lead from the beachhead is, net net, I think the right call, right? And you already see it. Take image creation: Midjourney is way ahead, and Musk's xAI has fantastic image creation capabilities because the platform has captured so many images throughout the years. So from my perspective, it's the right call. The question is around disclosure, around how they are going to continue to measure it, and, to your point, having those frameworks in place from within and continuing to stick to the mission.
That's a pragmatic perspective, Ken. It acknowledges the competitive realities of the AI landscape and the need for OpenAI to remain a leader. But it also places a heavy burden on the board to ensure that profit motives don't completely eclipse their stated mission.

It's really around, you know, the consistency, or what we like to call congruence, to actually make that a reality, right? A belief you state should be consistently reflected in your actions. Tim Cook cared about privacy, and he expanded that into Apple's ethos. Those are very clear examples of congruence. But I want to come back to a point that you made earlier. While I think it is the right call right now to invest in this talent competition, I am concerned whether they are just looking at the world through a traditional Silicon Valley lens of getting the best talent to build the fastest, the meanest, the greatest, most perfect AI, and whether that in itself is really what humanity needs.
I do see what you mean. As we've seen with how technology shapes our digital lives, frictionless perfection often makes the experience less engaging and less interesting. There's compelling research showing that when technology removes all obstacles, we become passive consumers rather than active participants: zombies scrolling mindlessly through feeds designed to maximize our attention rather than our well-being. The most perfect AI might actually deliver the least human experience.

What a perfect line. You know, it reminded me of someone saying, "If I can be perfect, why do I want to be authentic?" And I always thought the counterargument should be, "If you can be authentic, why would you want to be perfect?" That really is the crux of the question, right? Which version of the future do we want, and how do we reach it?

That's a powerful way to frame it, Ken. "If you can be authentic, why would you want to be perfect?" It really does get to the heart of the matter. Are we striving for a flawless, optimized future, or one that embraces the messy, unpredictable nature of human experience? The power of AI is so pervasive that this isn't just an abstract philosophical debate. It has real-world implications for how we live, work, and interact with each other.
And I think, in terms of takeaways, the overarching point is this: even though none of us individually has much ability to influence this, AI, as we've pointed out, will be pervasive in our day-to-day lives. So, in my mind, this is the opening bid in deciding who is going to control the future of AI. And as you hear us discussing, you may agree or disagree on which version of AI would be better for us: the bigger, meaner one, or one that is more humanistic and more empathetic to the human experience. Either way, it is important for you to have those opinions heard, just like our most recent election, where the majority of the population finally said enough is enough in terms of government overreach. So that's my takeaway. Sam, what about you? What would be your takeaway here?
I agree wholeheartedly, Ken. This isn't just about OpenAI or Elon Musk. It's about who controls the AI shaping your digital experience, and whether it's optimized for your well-being or for maximizing engagement and profit. Look at how algorithmic decisions already affect what news you see, which job offers you get, or even how your insurance premiums are calculated. My takeaway is this: be critical, be informed, and demand transparency. Don't blindly trust institutions to act in your best interest. Question their motives, scrutinize their decisions, and make your voice heard. The future of AI is too important to be left in the hands of a select few.

When we come back, Doctor Carmen Diaz and I will talk more about the authenticity framework and how we overlay it on the board's decisions.
And then there will be a special bonus: the fireside chat that you, Sam, are conducting with Doctor Abdullah. Would you like to elaborate on that?

Certainly, Ken. In our fireside chat, Doctor Abdullah and I will be diving deeper into the global competition in AI, focusing specifically on China's DeepSeek and other rapidly advancing competitors. We'll explore the implications of this competition for the future of AI development and the challenges of balancing innovation with ethical considerations. It's a conversation you won't want to miss.

Exactly. So please stay tuned. If you enjoy our conversations, subscribe. I look forward to continuing the conversation after the break.
