Andreessen Horowitz general partner Martin Casado argues that current efforts to regulate artificial intelligence are misdirected, focusing too much on hypothetical risks rather than addressing the actual, present-day concerns AI poses. Speaking at TechCrunch Disrupt 2024, Casado, who leads a16z’s $1.25 billion infrastructure practice and has invested in AI startups like World Labs and Cursor, emphasized that lawmakers are trying to create "new" regulations without drawing lessons from past technology governance.
Casado highlighted that many proposed AI policies lack clear definitions of AI itself, complicating regulatory efforts. He pointed to California’s SB 1047 as an example of ineffective regulation. The bill, which was ultimately vetoed, would have required a “kill switch” for large AI models. Opponents, including Casado, argued that such provisions, rooted in fears of speculative AI risks, could instead disrupt California’s thriving AI sector.
Casado warned that such ill-conceived laws discourage AI innovation. He noted that some founders hesitate to establish their businesses in California because of the state’s perceived stance on AI regulation, and he fears that future policies, driven more by public anxiety than by expert insight, could further restrict AI growth.
Casado brings a deep background in AI and infrastructure technology, having previously founded two companies, including Nicira, which VMware acquired for $1.26 billion. He believes effective AI regulation should distinguish AI from earlier technologies such as search engines or the internet, focusing on its unique "marginal risks." This approach, he argues, could guide policies that address the specific challenges AI actually presents.
He also contends that regulatory bodies already have extensive frameworks, built over decades, that could be adapted to address AI without imposing entirely new regulations. At the federal level, organizations like the FCC and the House Committee on Science, Space, and Technology could play roles in overseeing AI, much as they do for other technologies.
Casado acknowledges the view that early regulation might have prevented some of the unforeseen harms of social media, such as data privacy issues and online echo chambers. However, he contends that AI should not bear the consequences of regulatory shortcomings in social media. Instead, he argues that regulators should address issues within each technology field individually.