[Summary] E129: Sam Altman plays chess with regulators, AI's "nuclear" potential, big pharma bundling & more - YouTube (2023)


In this episode of the All In podcast, hosts David Friedberg, David Sacks, and Chamath Palihapitiya discuss the current landscape of artificial intelligence (AI) and the recent proposal for AI regulation by Sam Altman, CEO of OpenAI.

The podcast begins with a humorous "Reddit performance review" segment where the hosts read out audience feedback posted on Reddit about their performances on the show. They then move on to discuss the increasing demand for their All In Summit event and the plans for it.

The focus then shifts to a recent Senate hearing where Sam Altman, along with Gary Marcus (Professor from NYU) and Christina Montgomery (Chief Privacy and Trust Officer from IBM), testified about AI. Sam Altman proposed that a separate agency should be established to oversee AI, suggesting that the agency should issue licenses to train and use AI models. This would essentially regulate the development and application of AI models, and Sam Altman's OpenAI would potentially have a significant influence in shaping these regulations due to its standing in the field.

The hosts discuss their thoughts on this proposal. Chamath Palihapitiya, who predicted this move toward regulation two months earlier, finds it interesting that Silicon Valley, the home of AI innovation, is taking a more cautious stance than Wall Street, which is ready to place bets on AI. David Sacks criticizes Altman's proposal, arguing that it amounts to regulatory capture: OpenAI protecting its position by creating barriers for others.

David Friedberg points out that with AI models becoming smaller, more portable, and even downloadable, it would be difficult for any regulatory agency to audit every computer or server running them. In his view, the push for regulatory control may be an attempt to divert attention from the increasing ubiquity and democratization of AI models.


The speakers in the conversation discuss the ongoing evolution of language models like ChatGPT and their increasing migration to the edge of the network. They debate about the feasibility and implications of trying to regulate such rapidly advancing and proliferating technologies.

The discussants suggest that it might be a nearly impossible task to track, approve, and audit large language models (LLMs) and the servers running them. They cite the example of Hugging Face, an open-source repository of LLMs, and its leaderboard of models that are surpassing those developed by big tech companies such as OpenAI and Facebook.

The speakers criticize the proposal of government regulation of these models, arguing that it could stifle permissionless innovation, require unnecessary lobbying, and introduce lengthy approval processes. They also draw parallels to the bureaucratic inefficiencies of government agencies like the DMV.

However, they also acknowledge the potential harms of misuse of AI and LLMs, drawing a comparison with nuclear technology, whose use is mostly beneficial but can have devastating consequences when used destructively.

The conversation delves into the motives behind tech industry leaders endorsing regulations. Some suggest it may be a strategic move to gain favor with regulators and create a moat around their operations, making it harder for competitors to catch up. Others suggest it may be a genuine desire to limit potentially harmful uses of AI.

Finally, one speaker argues for introducing "stage gates" before large-scale models can be run, suggesting a form of "know your customer" (KYC) verification to ensure that those creating and running these models do not intend to cause harm. The practicalities and specifics of such a system, however, remain undefined.


In this part of the discussion, the hosts talk about the potential dangers of AI misuse, comparing its potential to cause harm to that of nuclear weapons. The discussion is focused on the risks of AI technology being used in ways that could cause serious harm, such as in the creation of harmful chemical compounds or potentially disruptive cyber-attacks.

There's a back-and-forth debate about whether AI regulation should be imposed, with some arguing that the technology is moving very quickly and should be carefully managed to avoid misuse. However, others argue that it's not clear how to regulate AI effectively, and that it might be premature to stop progress in this area before we have seen the full potential of the technology.

The hosts also raise concerns about AI's potential impact on employment, with fears that AI could make many jobs obsolete and lead to economic disruption. But some argue that these fears are misplaced and that the technology could also create new opportunities and make the economy more efficient.

Towards the end of the discussion, they touch on the recent appointment of Linda Yaccarino, former head of ad sales at NBC Universal, as the new CEO of Twitter under the leadership of Elon Musk. They think it's a sensible choice given that Twitter's business model is based on advertising and Yaccarino has a strong background in this area, complementing Musk's interests and expertise in technology and product design.


In Part 4 of the conversation, the panel discusses the backlash that the newly appointed Twitter CEO, Linda Yaccarino, has faced from both the political left and right for her views and social media activity. Despite the criticism, one of the speakers asserts that a leader disliked by both sides may indicate a good choice, although her effectiveness will only become clear in six to nine months.

They also talk about the diversity in Elon Musk's leadership team across Tesla, Twitter, and SpaceX, contrasting it with those who engage in virtue signaling without substantial action. They express concerns about the current level of censorship built into social media algorithms, especially regarding COVID-19 discussions.

A significant part of the conversation is devoted to the pharmaceutical industry, specifically the Federal Trade Commission's (FTC) decision to block the acquisition of Horizon Therapeutics by Amgen. While some panelists express concern about this move inhibiting the research and development of new drugs, others argue that the FTC's move is justified due to concerns over drug price inflation and stifling competition.

In the context of drug research and patents, they discuss the financial risks involved in pharmaceutical research, particularly for young, early-stage biotech companies. They express the fear that blocking such acquisitions might discourage investors, leading to less funding for these startups.

The discussion concludes with admiration for the capabilities of OpenAI's GPT language models, particularly their ability to summarize lengthy documents. The hosts express excitement over the launch of OpenAI's ChatGPT mobile app, predicting a significant increase in its user base.


In the fifth part of the discussion, they turn to anti-competitive tactics used by large tech companies like Microsoft. They argue that these companies pose a risk when they favor their own applications over others or use bundling tactics, which can create an unhealthy tech ecosystem. The commentators suggest that the real issue is tactics, not acquisitions: if a company buys another, lowers prices, increases consumer choice, and encourages more investment in innovation, that is positive. If the tactics instead reduce consumer choice or artificially keep prices high, they are problematic.

Next, they discuss the prospect of Apple's AR (augmented reality) headset, which is projected to cost around $3,000 and is expected to be revealed in June. This is a deviation from Apple's typical strategy of waiting until a product is affordable for the mass market before releasing it. The headset appears to be a product still under development rather than a finished one, and its potential killer application is said to be a FaceTime-like live chat experience. The hosts agree that until they see the product and its capabilities, they cannot definitively comment on its impact.

Towards the end, they discuss a Gallup survey that indicates a record low number of Americans (21%) believe it's a good time to buy a house. The hosts interpret this as a sign of plummeting consumer confidence and a possible indicator of economic instability. High mortgage rates and dwindling home affordability are among the reasons cited for this decline. The lack of fluidity in the housing market due to these conditions could negatively impact price discovery and reduce mobility for people seeking better opportunities.

Lastly, the conversation touches upon commercial real estate in San Francisco. According to a local broker, there are numerous vacant office towers, particularly in the SoMa district, which are becoming "zombie" properties. There's not enough demand from AI companies or VC-backed startups to fill these spaces. Furthermore, landlords don't have the capital for tenant improvements demanded by these startups, and they are not considered creditworthy tenants. This could lead to a banking problem as loans may need to be written off due to the depreciating value of these properties.


In the last part of the All In Podcast Episode 129, the hosts discuss the current state of real estate, particularly in the context of setting up an incubator in San Mateo. They find that demand is focused on top locations and buildings with high-quality amenities, leading to sustained high prices. However, commodity office spaces are seeing reduced prices.

The discussion then shifts to two incidents: a shooting involving a shoplifter in San Francisco and an event in New York where a former Marine tried to subdue a violent homeless man. They criticize the media's portrayal of these cases and bring up the role of mental health issues, particularly in the latter case. They suggest converting post offices into mental health facilities as a possible solution.

The conversation then turns to George Soros, the billionaire who has reportedly funded District Attorney elections to bring about changes in law enforcement. They also discuss his Open Society Foundations, which aims to spread democracy and liberal values but is perceived by many as meddling in the internal affairs of countries. The hosts end by expressing their willingness to interview Soros or his son on the podcast, offering them a platform to explain their actions.
