RadicalxChange(s)

Frank McCourt: Founder of Project Liberty (Part II)

Episode Summary

In this episode, Project Liberty Founder Frank McCourt joins Matt for a second round to discuss the challenges and opportunities presented by rapidly developing AI technologies. Building on their previous conversation about digital infrastructure, they explore whether AI will exacerbate the problems of social media, digital advertising, and data centralization, or fundamentally change them. McCourt emphasizes fixing the internet’s design flaws to ensure AI benefits society, advocates for returning data ownership to individuals, and stresses the need for political engagement to align AI with democratic values. Tune in for this enlightening conversation about what we can do moving forward.



Bios:

Frank H. McCourt, Jr. is a civic entrepreneur and the executive chairman and former CEO of McCourt Global, a private family company committed to building a better future through its work across the real estate, sports, technology, media, and capital investment industries, as well as its significant philanthropic activities. Frank is proud to extend his family’s 130-year legacy of merging community and social impact with financial results, an approach that started when the original McCourt Company was launched in Boston in 1893.

He is a passionate supporter of multiple academic, civic, and cultural institutions and initiatives. He is the founder and executive chairman of Project Liberty, a far-reaching, $500 million initiative to transform the internet through a new, equitable technology infrastructure and rebuild social media in a way that enables users to own and control their personal data. The project includes the development of a groundbreaking, open-source internet protocol called the Decentralized Social Networking Protocol (DSNP), which will be owned by the public to serve as a new web infrastructure. It also includes the creation of Project Liberty’s Institute (formerly The McCourt Institute), launched with founding partners Georgetown University in Washington, D.C., Stanford University in Palo Alto, CA, and Sciences Po in Paris, to advance research, bring together technologists and social scientists, and develop a governance model for the internet’s next era.

Frank has served on Georgetown University’s Board of Directors for many years and, in 2013, made a $100 million founding investment to create Georgetown University’s McCourt School of Public Policy. He expanded on this in 2021 with a $100 million investment to catalyze an inclusive pipeline of public policy leaders and put the school on a path to becoming tuition-free.

In 2024, Frank released his first book, OUR BIGGEST FIGHT: Reclaiming Liberty, Humanity, and Dignity in the Digital Age.


Matt Prewitt (he/him) is a lawyer, technologist, and writer. He is the President of the RadicalxChange Foundation.


Episode Transcription

Matt Prewitt: Okay, Frank McCourt, thank you so much for joining for a second conversation.

Frank McCourt: It's nice to be with you again, Matt. How's it going?

Matt Prewitt: Doing pretty well. So, by way of orientation: in our last conversation we talked a lot about infrastructure, about the way that the prior generation of digital infrastructure has either failed or succeeded and, in some ways, perhaps deviated from the pattern of older infrastructure building, like roads. And I think we both wanted to extend that conversation into the question of the day, which is: how will these new, quickly developing AI technologies change the kinds of problems we have been worried about with regards to social media and digital advertising, and the way that the digital infrastructure we have had around us for about 10 years now is affecting the public?

So there are obviously a lot of very specific questions we can get into with regards to AI, but I'd love to start with just your general thoughts. Is AI likely to be a continuation or an exacerbation of the kinds of issues that have worried you about social media, or does it change the nature of the problem? What are your feelings about it?

Frank McCourt: Yeah, I think this is a really important issue for us all to grapple with, because generative AI has become this kind of shiny object which is attracting a ton of attention. And as it should, by the way, but not in a way that distracts us from the basic design of how tech is working right now, right?

We talked about that in our last session: not just social media platforms, but all the large platforms are basically scraping data from individuals, aggregating it, applying algorithms, which are predictive tools, and having these algorithms make judgments about us, our behaviors, our tendencies, what we're going to do with a piece of information, how we're going to react to it, how we're going to get triggered by one thing or the other. It might be as benign as buying something, but it might be a lot more important than just buying a new device or a new piece of clothing.

So this entire tech, the internet, let's focus on it, evolved from a decentralized internet to a highly centralized one at the dawn of the app age, and we've had 15 to 20 years of that now: centralization of the data, and of the power and influence that creates, into these five large platforms that are, if not the only ones, certainly the primary ones to have access to both the massive data sets these large language models are trained on and huge compute power.

And remember, it's the massive data and the massive compute power that are necessary for these large language models to operate, because they too are predictive. These large language models are predictive technology, right? Lots of information is crunched and the next most likely word is spit out, and then the next most likely phrase, and then sentence and paragraph, et cetera. So again: predictive models based on a ton of information and the ability to crunch all of it. And by applying sophisticated algorithms, or AI, in this case large language models, to this data, we get this new age of generative AI.

But there's a through line here, right? This is, for the most part, data and information produced by individuals. Their data is being scraped from them either because they're using a quote-unquote smart device, a television, a dishwasher, a refrigerator, a car, whatever, that's recording all kinds of information, or because they're engaged more actively with a computer or an iPad or a smartphone, usually in social, search, or shopping, right? Those are the big ways individuals engage. So this data is being created at massive scale and grabbed by a few large platforms that then convert it, train on it, and create these chatbots and so forth.

And it's the same model, right? The same architecture: gather up people's data, aggregate it, and apply sophisticated algorithms, or AI, or whatever you want to call it, to it, for whatever result is being optimized for. So it's not like we have a broken internet that is all going to suddenly be fixed by a more powerful version of the technology. As a matter of fact, I think we have a broken internet, and the problems are going to be exacerbated by a more powerful version.

You'll have better misinformation and better disinformation and better deepfakes and more hallucinating by the large language models, and so on. More harm can be done, and more chaos and confusion can be created. Now, having said all that, generative AI is pretty awesome, just as the internet is pretty awesome.

And if we had a better design, we could eliminate a lot of the problems. We already know what the problems are, because we were seeing and experiencing them pre-generative AI, or pre the emergence of the term, because it's a marketing term, and pre the delivery of a user interface. The model is clear in terms of how it operates: the good things it does on the one hand, and the bad things it does on the other.

And I would just say, as with any flawed design, logic says fix it before you make it more powerful, before you put it under further stress. We talked a little bit about physical infrastructure in our last session.

All kinds of physical infrastructure has been built in this country that was better than the alternative, but flawed. Once it was pressure-tested at scale, the flaws became apparent, and then it was redesigned and re-engineered into safer, better, bigger, more effective infrastructure.

And I think we can learn from that. We have flaws in the design which are now very apparent. We should fix them, and then have the benefits and wonders of generative AI, solving problems at scale rather than creating problems at scale. So I'm all for AI and all for the tech.

It's just that I'd love to see it fixed before it becomes more powerful and unwieldy. And I think, by the way, Matt, that all comes down to what we talked about before: let's put the power back in individuals' hands, so that they own and control their data, and you and I can have a say as to whether our data is used in a particular large language model or not.

And if it makes somebody a jillion dollars, what's the deal? We have the ability to say: no, I'm not comfortable with you using my data, because you haven't proved to be trustworthy, or I don't like your standards of operating.

I'd rather let my data be used on some other platform that is more in keeping with my own belief system, my own standards, and so on. So it all comes back to agency, and we've lost agency in the current world, the pre-generative-AI world.

And I think we need to return agency to individuals first, before we rocket-ship into this next version of the technology. As we said in the last session together: if one believes in the principles of individual liberty and freedom and rights and autonomy and choice and agency, we don't have that now in the app era of the internet.

So let's address that, fix that, and then enjoy the benefits of rapidly advancing technology, which I think will be pretty awesome if we get the design right.

Matt Prewitt: Yeah, I want to get into the design a little bit. I'd love to double-click on that. But first, a slight digression I just wanted to share with you: I read something recently that made me think of our earlier conversation about infrastructure. I was reading about the collapse of the Khmer Empire, the civilization that built Angkor Wat. They thrived 800 years ago or so because they built the best network of roads in the area, this fantastic network of roads.

This enabled them to trade and so on. They also had great infrastructure around water and things like that. But when the empire fell on hard times and weakened, it collapsed quite quickly, because the roads that they built were used by their enemies to move troops into the Khmer Empire really quickly. I just thought that was interesting. There's this dual aspect to infrastructure that came through in that little anecdote, and it made me think of the conversation we were having.

And it also makes me think about AI, right? Because if we think of AI as infrastructure, it's important to consider not only what it can do for us under ideal circumstances, but also what vulnerabilities it creates and what faults it might exacerbate in the society, so that it's serving us not only on our good days but on our bad days.

Frank McCourt: It's a great example, because we have right before us the whole TikTok situation: how quickly Congress moved because of the national security threat of this two-way street between the Chinese government and American citizens, right?

It's providing something to American citizens that they like and enjoy, but at the same time it's providing a pathway in for China. And I think we need to have a bigger discussion about that, because this is the same tech that our own big platforms are using, except that the data is going to Silicon Valley, not Beijing or Shanghai.

And I get that that's a big difference. But nonetheless, I don't think we want all of our personal information being centralized and aggregated anywhere, quite frankly. And that goes back to the point: if we just march forward, or race forward, without thinking about how to make the technology better, we're going to default into a place where five companies have the data and the compute power, and they're going to be the beneficiaries of generative AI. Yes, apps will be created, use cases, chatbots, things that you and I can use, but it won't address the issue of who will own the data that's collected and that continues to power this machinery. Because, again, it's the data that is fueling this, particularly the human data, because these predictive models are based on how humans think, not how machines or other animals think. They're based on human thought processing, human preferences, human emotions, and human tendencies.

So all of this information that's scraped from the internet is the live data going into these models that are being trained. And there are only a few places that can currently benefit from all this, because they have the head start of having aggregated a lot of data and built up a lot of compute power. But I would argue that this is not private infrastructure.

Some portion of this has to be public infrastructure that we all benefit from and have a say in how it's used and how it's operated, rather than being reliant on an individual CEO, or the board members of one of these companies, deciding what's good for all of us.

And worse yet, one of those individuals making some kind of deal with a political figure and giving that political figure all kinds of power, because they, by extension, have the ability to use a platform to influence and sway and convince and propagandize, and so on.

So again, when I think of generative AI, I think this is just a bigger, more potent, more powerful version of what we have. And we already know that what we have is flawed, deeply flawed, right? You don't even have to be an individual to be on the current version of the internet, right?

You can participate by being a fake person, multiple fake persons, a machine bot, et cetera. Identity is just one example of a flawed infrastructure that needs to be addressed before we get to the next level of power and capacity. And again, the technology could be awesome and incredibly beneficial for humanity, which is what the original internet was designed to be: to make us all smart.

And to create access to information and connect us, and so forth. It's become something very different, which is going to lead to very different outcomes, ones that I'm very uncomfortable with: having all this power centralized. The power derived from the information, and the influence and wealth that creates, becomes a cycle of more influence, more wealth, more power, et cetera.

And that's exactly what we fought against to become this country, from a colony to a country and from subjects to citizens, and I don't want to see a slip backward here.

Matt Prewitt: When I listen to the AI conversations, of which there are many, obviously, one little framework I use to categorize them in my mind is that I often hear monopoly concerns being pitted against safety concerns, being pitted against geopolitical concerns, right? So there are these three buckets, and I'm curious how you think about these three buckets of concerns, and in particular how you think various interventions interact with them. I'd love to hear your thoughts about two things in particular.

The first is open source. How do you feel about open source models? Does that address the concerns you have about concentration and monopoly? The next thing I want to hear about is data ownership, decentralized social networking protocols, and systems for people to maintain control over data, and how those could interact with AI.

But let me set up the open source question a little bit, because it seems to me, at first pass, that open source AI is an intervention that mitigates the monopoly problem, but could potentially exacerbate some of the safety problems, or some of the geopolitical concerns, right? And by geopolitical concerns, just to get it out there explicitly, what I mean is basically: is China going to get there before the United States in terms of building a super powerful AI? That's all I mean by that.

Anyway, open sourcing models seems to me like a potentially interesting, even good, intervention against the monopoly dynamics. But it's worth remembering that there's a kind of ironic history here, in my view. Open source data 20 years ago seemed like a force that was very powerfully fighting against monopoly, but in the last five years or so, open source data sets have been one of the tools that have enabled just a handful of companies, the ones with the expertise and the compute to build these models, to zoom very far ahead. So there's a way in which things that seem open at first can get captured in hard-to-foresee ways down the line. I just think that's an interesting history.

And it's hard to articulate how open source models might play out in the next generation of the technology. But anyway, there's a lot there. Essentially, I'm curious to hear your thoughts about whether open sourcing models is a good idea, and about how your ideas about data control will end up moving power around and how they interact with monopoly, safety, and geopolitics.

Frank McCourt: Yeah, a lot there. I'm glad you mentioned data ownership, because, and this is my perspective and I welcome other perspectives, I think many would agree that the current situation is less than ideal, and we're entering a new era where we shouldn't just be winging it and hoping it works out.

We should be reflective right now and really fix the design so that, as I said earlier, when the technology becomes more powerful, it's more powerful for good, right? For humanity. And of course we need commercial models to sustain it. I come at this from the vantage point of: how do we best have the technology embrace and reflect the ideals and principles of a free society and this kind of capitalist democracy? Those are the two operating systems that have worked together to build America and, to one extent or another, free societies. So that's the point of view, the starting point, that I'm coming forward with, and others may disagree.

I happen to think this is a model worth saving, improving, and strengthening, so that it becomes more just and more fair, but based on the principles of the worth of the individual and the importance of rights that are unalienable, rights that only mean something if you respect them in other people. So: the social contract, and a meritocracy where everybody has the same chance to succeed, which is very different from everybody getting the same result. That's foundational to my perspective on this.

And so I would love to see the technology that we have reflect those principles and values. As we talked about in our prior session, if I asked you to describe democracy, you probably wouldn't say centralized, autocratic, surveillance-based, predatory, exploitive. Words like liberty and choice and autonomy and freedom and social contract and fairness all come to mind. When I zoom out and see what's happening with highly centralized technology, I see how an autocracy like China will use it, right?

Because I don't think of Chinese people as citizens in the same way I think of Americans as citizens. I see them as people who are proud to be Chinese and are living there and so on, but they don't come with the same rights that we have in the United States, right?

So, in my construct, they would be more like subjects. And the government there is very explicit about its expectation that everybody toes the same line, and that the Chinese people are surveilled and might get punished if they're not toeing the party line, et cetera. I see that as very un-American.

And so rather than replicate highly autocratic, surveillance-based technology and compete with China that way, I would rather see us create technology that actually embraces our core values and principles. And I think that starts, in large measure, with returning ownership and control of our personal data to each of us, not scraping it and stealing it from us and then using it in ways we're not even aware of.

I think that goes a long way toward building out technology where people feel empowered by their data, not frightened or exploited by how it's used. And we have a whole regulatory framework being built out to, what, protect people, right?

It's data protection, and that implies that they could be harmed, right? If you need to be protected, there must be some harm being caused. Why not shift the thinking and let other people have that autocratic, centralized form of technology? Why don't we have one where we empower our citizens, empower each of us, to feel like we're part of the creation of the next great version of technology, to thrust humanity and civilization forward? It's very doable. And I actually believe that if you create an internet with integrity, where people are back in the driver's seat, they'll share more information, and do it in a way that the information being put into these large language models is highly calibrated, very accurate, very clean.

You'll have very high fidelity. You won't have the noisy data we have now, with so much garbage echoing around the internet. And then you have technology that embraces values and enjoys the benefit of the power of this massive tool, but built on a different set of principles, and those principles are what has allowed the United States to become the dominant, meaning biggest, economy and the best version yet of democracy that humankind has seen, imperfect as it is.

It's also the best version of capitalism yet seen, imperfect as it is, and we should do better in terms of fairness and justice. But I don't believe we're going to do better by taking a radical pivot to autocracy. And so I just feel the technology is so powerful that if we don't have it embrace this idea of agency and liberty and choice and the power of the individual, we're going to end up competing with China. And I don't think we'll ever outdo China at being China. We might surveil as well as they do, but I don't think we'll be any better at it.

And they also have a population that is conditioned to toeing the party line. We have a population that's conditioned for freedom and liberty. I just think what's been our secret sauce is these core values and principles, so why not embrace those as we build powerful technology?

And by the way, I think if we do that, we will unleash massive invention and innovation and creativity that China, with their model, will never unleash. We'll have the best, freest minds operating here, building out the alternative to a centralized, surveillance-based model of technology. And I think that would be epically wonderful, because it will open up all kinds of economic opportunity for people, and innovation that we can only sit here and imagine. So for me, the key to all this is to go back to the very basics: what are our values?

What's the values proposition, not the value proposition? After you settle on the values proposition, then you can talk about incentives and economic structure. And you said that our current model took a track quite a bit different from what it was originally designed to do.

We're sitting here with these large platforms that now have the data and the compute power. That happened because of incentives, because of an economic model: an attention economy and an ad tech industry that fueled these platforms to connect consumers with products and to sell ads. And that incentive model shows you how powerful capitalism is and how powerful incentives are. You just have to look at how a decentralized internet became highly centralized because of incentives. So let's not lose sight of incentives here. Open source, I think, is necessary and important at a certain place in the tech stack.

There will be use cases where creators, builders, and entrepreneurs build things to create value. Let's have them incentivized correctly, and certainly do it with permission. If somebody's going to use my data, I'd like to give them permission. That's why I said in our last session: let's have an internet where the new platforms are clicking on our terms of use.

You see the New York Times copyright lawsuit against these platforms. Maybe the New York Times would be very happy to have its information used in some of these models if it were permissioned, not just scraped. Right now everything has just been ingested by these machines, regardless of who created it, who owned it, or how valuable it is. It's just: okay, we own it now, we're going to go build stuff and profit from it. That's not right. That's not fair. Let's catch our breath here and build this in a way where people give permission to use it.

It's not that people will refuse to let their data be shared. If you and I are asked to use our data to cure a disease, we're going to say: sure, you can use this piece of my information, for that purpose only. You can't resell it, and you can't have everything about me just because you want some piece of medical or biological information. The social graph is rich in all kinds of information, and we should have some ability to parse it out the way we want to, and for what purpose. And maybe there's another piece of my information that's only going to be used for somebody's commercial purpose, and I'm happy to let them use it for an economic trade.

And again, all of that doesn't even get to the point of: am I sharing information that's connected to me as a person with a name, or am I sharing information that's connected to a verifiable human being without giving up any of my privacy?

Those are very different deals, right? Very different transactions, so to speak. Yeah, I think who owns and controls the data and decides how it's used is really the key to all of this, Matt. And then incentives will be another key, because we want to have the best AI and the best technology to help people, and if we unleash the creativity, we'll build something awesome.

I'm totally convinced of it. There are a lot of very smart people out there who would prefer freedom and liberty to autocracy, a dictatorship, or any other centralized political system. When Kennedy said in the sixties that we're going to the moon, it got a lot of people excited about figuring out how to do it, because he didn't know how to do it at the time. He said, let's do it because this is a new frontier, and we want it to be open for humanity, for civilization, and for learning, not controlled by someone else.

And I think we're at that kind of moment again, where we can build some great stuff. But it was the values that Kennedy brought forward in that speech, not just the tech; the tech hadn't even been built. He was talking about a set of values and aspirations and an ambition for human civilization, and then the tech got built to enable it, embracing those values. That's what I'm talking about here.

Matt Prewitt: Yeah. If you take the value of autonomy, what strikes me is that there are a couple of uninspiring visions for the future of autonomy, and I think what you're arguing for is the articulation of a different one. The elephant in the room here, I think, is that AI is a very powerful technology for manipulating people, basically.

On the one hand, we've got a vision of the future in which an autocratic government uses this technology to manipulate its subjects. On the other hand, we've got a vision of basically a handful of private actors using it to manipulate markets, manipulate consumers, these kinds of things, right?

So in other words, we've got an autocratic and a capitalistic dystopian vision of how this technology can be used to manipulate people. And I think the question is: how can we imagine this technology being used to enhance our autonomy?

And, I think that, that there, there are different sort of ways in which that might work too, but what you seem to be saying, and correct me if I'm wrong, is that if we all had a bit more control over how our data was used, Then the terms that we would be able to impose upon the users of our data, the people building models with our data, would, would be such that they would determine the sort of downstream uses of the technology in, a better way.

So that, for example, all the providers of data to models could join together and impose terms on model builders that would ensure the models are not being used deceptively, not being used manipulatively, these kinds of things.

I'm trying to mirror back what I'm hearing. Is that right? Is that how you see it too?

Frank McCourt: Yeah, it's totally right. The only thing I would maybe tweak about what you said is that I don't think it's quite as binary as that. There are two models we don't want to replicate or perpetuate.

One is the model that manipulates us as people, where we lose our citizenship and become subjects. That's a political dynamic; you talked about China in that context, right? The other is a corporate model, not a government or communist party model, where the data is the advantage of a handful of companies and the incentives drive us to do things economically. It's a marketplace kind of thing.

My one tweak is that I see it as more than just the corporate version, more than just manipulating us or optimizing for economic incentives. I see it as having now spilled over well beyond the economic, marketplace issues into our culture and our society and our politics, and it's affecting our ability to operate as a democracy. Because the incentives are not just economic; the incentives also reward the most extreme behavior. And I say behavior deliberately, because it's not only the most extreme viewpoints, it's behavior. When the like and follower buttons were added, this kind of surveillance-based technology, which accumulated information to sell ads, to sell things, went into overdrive. It became a performance-based internet: how do I get the most likes or clicks or followers? And that rewarded the most extreme behavior or position or whatever.

One only has to look at it: pick a moderate politician who makes sense when they talk. They may be right of center or left of center, it doesn't matter, but they're moderate. See how many followers they have, and then go to an extreme politician on either side of the spectrum and look at how many followers they have.

There's just no comparison. All of the noise is at the extreme edges. And then the media picks that up, because the media is driven by this search engine optimization model, right? If the stuff getting picked up the most is the extreme stuff on either side, that's what gets reported on and amplified even further.

And then anybody in the middle, I don't know what the percentage is, but just take 70 percent in the middle and 15 percent at the extremes. That's not science; I'm just trying to make a point. It's that 15 percent at either end that may not be at all reflective of the 70 percent in the middle, whose viewpoints may be more nuanced.

They may agree with some things and disagree with others. They have their own personal, curated perspective on life and on what's appropriate and what isn't. But what they're getting is this extreme behavior on either side, and that gets amplified even by conventional, non-social forms of media: traditional TV, traditional newspapers, all that stuff.

So my only tweak to what you said is that this model deteriorated further with the like and follower buttons, when it became performance-based and rewarded extreme behaviors, by the way, including things that weren't even true.

There's no reward for truth, no reward for discovery, no reward for a better idea, no reward for advancing civilization or strengthening democracy or protecting kids. The reward system is for extreme behavior, and that includes lying and making stuff up. It becomes a performance-based internet, and maybe that's entertaining for some, I don't know, but it's not conducive to the types of things we've been talking about here, which is how you actually use technology as a force for good.

I've already given you my perspective on what I think good is: a force for a fairer, better economy and a more just and stronger democracy, and all that entails. And I just don't think the technology we have is doing anything to help right now. So why make it more powerful before we fix what's wrong, if we agree that these are noble and worthwhile goals? I see this as a fork in the road, right? We could turn left and follow the machines and the technology and say we've got to go build the best AI and the best technology regardless.

Or there's a path to the right, which says: let's not forget what makes us special, what's made us successful, and what this country has been about for the last 250 years. Let's take that path forward and build technology that's every bit as powerful as what we'd get if we took the left fork, but that's actually going to succeed.

And we're not going to give up our self-sovereignty, right? We're not going to give up our agency. We're not going to give up who we are just to use it. I think it's time that we have an IP address for individuals on the internet, not just an IP address for devices.

I think it's time that we show up as individuals on the internet, with each of us in control of ourselves on the internet. That's what agency is about. And we're not going backward. The tech is here. The internet is here. It could be great.

It's not working great now. And one of the big misses here is that we let ourselves become devices. If we want agency, we need to take back control, take back the power of our data, and then have this awesome internet, this tech, kick into the next gear.

We're seeing the power of the internet, the power of data, the power of our social graphs, the power of technology generally, the power of generative AI. All of it is pretty amazing. But let's have it embrace the right values, the ones we agree on, and then let's have fun building again, beating the competition, but with our secret sauce, and with what matters certainly to me, and presumably to yourself, because we're having this conversation: individuals matter, we matter. I'd rather be a citizen than a subject, and I don't want to give up my citizenship just to use the internet or have the benefits of generative AI.

Matt Prewitt: Yeah, yeah, I couldn't agree more.

And it strikes me that what we need to do, to avoid this technology being used in ways that won't make us happy, is to say what we want: the kinds of values that you just mentioned, whether, for example, we want AI to serve truth, or the discovery of new technologies of one kind or another, these kinds of things. We need to put down those markers, right? We need to articulate them. It seems to me that if we don't articulate what we want the technology to do, it certainly has the ability to serve the same kinds of incentives that, for example, social media systems have served.

For example, we seek novelty, right? As consumers, as viewers of entertainment, we want to see something new, something different, something that stimulates us. And this creates an incentive for content creators on social media to say things that are more extreme, more inflammatory. AI can do that too, and it can also create content that scratches those same kinds of itches. If we don't want that, we have to say what we want it to do and point it in a particular direction.

And what I spend a lot of time thinking about, what I'm really curious about, is how we lay down those markers, right? Those values that we think the technology should be serving, how are they going to emerge?

One way I'd love to see them emerge is this: if people had more power over their data, more leverage or control over the way their data is used, could they deliberate about what they want it to be used for, and lay down markers about how the data can be used, markers that downstream actors building models with their data would need to respect? Another model is that those markers get laid down through regulatory processes, at the government level.

There's also, I think, one interesting example here in the work of my colleagues at the Collective Intelligence Project, who are running innovative democratic processes that articulate values and then try to build those values into the constitutional AI structure in Anthropic's models, right?

So this is another way of articulating what those values are and trying to make sure the AI systems we build are pointed toward them. The difficulty, of course, is agreeing. How do we all see each other, have a conversation, have a deliberation, take a vote, go through some kind of cultural process in which we articulate these things? How do we get to the point where we have enough consensus, or enough legitimacy, around a few values that we can plant them in the ground in the clear and authoritative way we probably need to, and ensure that we're steering toward them?

How do you think about that? What does that look like?

Frank McCourt: Yeah, I think you're circling around the set of issues that are really front and center as we reinvent, reinvigorate, and reenergize democracy, not romanticizing it but actually reenergizing it. A couple of things. I love the way you set this question up: it requires each of us to engage and express our viewpoint on things, right? Being a citizen actually comes with a set of responsibilities. You can be a very passive subject, and so for me, if I think of subjecthood, passivity fits right into it as one of its characteristics.

If I think of citizenship, I don't think of passivity. I think of being active, responsible, accountable, right? Each of us is accountable for our own behavior in a democracy. And the next point is, when you think about it, democracy isn't about getting your own way all the time; that will seldom or never happen.

It's about, as you say, reaching consensus that this is the best way forward at this moment in time for all of us, and staying true to what we are at the core. That's why the thin-layer code, the protocols, the founding documents (the Constitution, the Bill of Rights, even the Declaration of Independence) are very deep on the proposition, deep on the values and principles, and very short on the prescription for how to implement them or how best to do it. That's left up to us. And the founders knew they would only be around for a period of time. They would give it their best shot, but then the next generation would have to give it their best shot, and so on.

And here we are. It's simple. We can sit back, be passive, allow ourselves to be subjects, and let the machines do their thing. We can all sit there and say, aren't these machines awesome? Or we can say, wait a second. Yeah, the machines are awesome, but we should be the master of the tool.

And I want to be a citizen. I'm willing to do what it takes to citizen, to think of it as a verb, not just a noun. And that means I care about the future. I care about the next generation. I care about my fellow man and woman. I care about my family.

I care about my own life, and it's worth investing in these things. Some will invest more than others; I get that. It's not asking everybody to make it a full-time job. But we do need to move away from just being entertained by the internet and becoming passive because of it, because of its design, which is addictive and gets people spending hours and hours online. Let's get the light bulb to go off and say: we can have all the benefits of this, we can be entertained, but at the same time we could be smarter.

And we can be more engaged, and it can be more fulfilling and fun, and we can actually build a future that we as individuals feel more fulfilled by, while doing something good for each other and for the next generations. Without that kind of perspective on the future, we might as well take that left fork and just follow the machines.

Because the right fork does involve showing up. It does involve being a citizen and expressing a viewpoint, but also listening to others. I think everybody should come at it and ask: what do we all agree on? Do we all agree that we're worth something, that we prefer to be citizens rather than subjects?

Okay, good. Do we all agree that we're equal and should be treated that way by others and by the law? Okay, good. Do we all agree we should have the same opportunities in life? Okay, good. Do we all agree we're smart enough to govern ourselves, that we don't need somebody appointed, or born into a family, to govern us, that we can figure it out ourselves?

Okay, good. Et cetera, et cetera. Do we all agree, by the way, that if these rights are going to mean anything to each of us, we've got to respect them in other people? Okay, good. We don't have to agree on much more. We just have to agree on the basics: that we're going to take the fork in the road to the right and figure this out together, based on a set of principles where we're each worth something, and worth a lot. We are equal. We have ideas. Everybody carries their own dream forward.

Let's express that. Let's amplify that. Let's give people the excitement that comes from the opportunity and the possibilities we're all driven by as human beings: the relationships, the learning, the getting smarter, the acts that make us feel good about ourselves, and the acts of others that, when we see them happen, make us feel better about all of us. Let's embrace that and have technology that enables it.

And the last thing I would say is, I get nervous when we start thinking the technology itself is going to do it for us. The thin-layer legal code, the governance code in the Constitution and the Bill of Rights and so forth, doesn't do it all for us. We have to figure it out ourselves, consistent with a set of principles and values, and that creates an obligation on us to sort through it. I don't think the thin-layer code, the computing code, is going to figure it all out for us either. It can do things like give us ownership and control of our data; that it can do.

It can give us agency, and give others the same. But it comes with us bearing the burden of figuring out the rest, how we want it to be used. And generally, I find, if you take away the triggering and the manipulation and the brainwashing, and you just get back to basics, people are good for the most part. People want to be smarter. People want to make more money. People want their family to be taken care of and safe. People help their neighbors when they see they're having a problem. People do a lot of great things for one another, notwithstanding all the bad stuff we read about and see around us.

But why amplify the bad stuff? Let's amplify the good stuff. Right now, with a poorly designed technology, it becomes a tool for wreaking havoc: bad actors outside the country that want to disable our country, and bad actors inside the country with whatever motivation they have, can use it as a tool to do bad things.

You and I aren't getting off this Zoom call to go figure out how we can use the internet to harm kids or undermine democracy or create more misinformation. But there are people doing that full time, because it's been weaponized, because of how it's designed.

It goes back to your Khmer Empire example. Those roads can be weaponized and become the very device that overthrows the empire. And I think we're at one of those points right now, with something built for good, to help us advance not just as Americans but as humanity, as a civilization. Believe me, I've talked to the people who created the internet, and they did it for humanity. They didn't do it for money. They did it because they saw the potential and the possibilities, and they're saddened by what it's become.

And I think we owe it to them as well to fix it before it's too late.

Matt Prewitt: So I think I've got a sort of hard question here, but I want to ask it. You're articulating a vision for the future of technology which is really quite macroscopic, and the question that occurs to me is whether you think it should be a political question. On the one hand, I think that I, and many other people who work on these issues, are often tempted to say that we're all on the same side here, right? That these questions about the future of technology concern everyone.

They certainly do bridge across conventional political divides, like left versus right; this kind of thing is, I think, quite orthogonal to that. But it does seem like it might be useful for us to be having conversations at a national level about the future of technology: do we want to go this way?

Or do we want to go that way? And I wonder if it needs to be somewhat politicized, because the reality is that there are forks in the road here. There are different ways we could go. Do you think that in order to really coherently pursue this way and not that way, we need to define that difference and make it part of the conversation on a political level?

Frank McCourt: Yeah, absolutely. Absolutely, we do. I think it's essential to the outcome here. It's why Project Liberty has a third leg to the stool. A lot of people want to focus on the technology, because it's interesting and it's such a big part of our lives now, so they focus on the tech track of Project Liberty.

And to some extent they focus on all the work being done in the policy, governance, and research areas with our academic partners, and so on. But really it's the third leg of the stool that I think is the key one, and that's the movement, or campaign, leg of this. I don't know, can you think of a bigger issue in the day and age we live in? I cannot; no, nor can I.

And so shouldn't our national political discussion engage with the biggest issue of the day? I certainly think it should, and I'm disappointed that it's not front and center in the national political discourse and debate. I think it's a miss. Sadly, because there's no discussion about the biggest issue of the day, one that might actually drive the conversation toward a more hopeful future for all of us, a lot of time and energy goes into filling the vacuum with stuff that, in the scheme of things, is irrelevant to our lives. It may be relevant in some ways, and I don't mean to comment on any of the specifics, but in general our days are full of "news," quote unquote, that is anything but news about the topic we've been talking about.

It's about stuff that people have already made up their minds on, driven by the technology we talked about: pick your side, then get fed information, whichever side you're on, to reassure you that you're correct and give you the "evidence," quote unquote, that you're correct, so you can argue with someone on the other side about why they're wrong and you're right, while the person on the other side has the "evidence," quote unquote, to prove that they're right and you're wrong. And so we sit in this endless argument, not about the biggest issue that confronts us.

We're making ourselves smaller, not bigger, and that's never a good thing. I think it's time to have the conversation. I think it's essential that we have it. And if it's not going to happen in our national political discourse, then nothing stops us from having it ourselves. We're citizens, remember.

Matt Prewitt: Frank McCourt, thank you so much for taking the time for this conversation, and thank you for all the work that you're doing.

Frank McCourt: Thanks, Matt. I've enjoyed the conversation once again. Talk soon.

Matt Prewitt: All right. Talk to you soon. Thank you.

Frank McCourt: Take care. Bye bye.