Greg Brockman predicts AI will let small teams match the output of large ones if they can afford the compute

Greg Brockman, co-founder and president of OpenAI, envisions a future where artificial intelligence empowers small teams to achieve productivity levels comparable to those of large organizations, provided they can secure sufficient computational resources. Speaking at the inaugural AI Summit in Paris, Brockman highlighted how AI is reshaping the landscape of software development and innovation. He emphasized that the key differentiator will no longer be team size but access to compute power, which serves as the primary constraint in leveraging advanced AI models.

Brockman drew on OpenAI’s own experience to illustrate this shift. Historically, building sophisticated AI systems required large engineering teams working over extended periods; early language models, he noted, consumed thousands of person-years of effort. Recent advances have dramatically compressed that timeline. Brockman said OpenAI’s o1 model was developed by a team one-tenth the size of the group behind GPT-4, yet delivered superior reasoning capabilities, roughly a tenfold gain in per-person output and a clear sign of AI’s ability to amplify human work.

The core of Brockman’s prediction lies in AI’s ability to automate and enhance complex cognitive tasks. He described how AI agents can now handle intricate reasoning chains, debug code, and even devise novel solutions independently. In one example, Brockman recounted solving a challenging physics problem with an AI model: it not only arrived at the correct answer but provided a step-by-step explanation that deepened his understanding of the concept. Such capabilities extend beyond individual tasks to entire workflows, enabling small teams to tackle projects that once required hundreds of specialists.

Compute emerges as the critical bottleneck in this equation. Brockman explained that training and running state-of-the-art AI models demand enormous amounts of processing power, often measured in GPU-hours or datacenter-scale resources. While model architectures and algorithms continue to improve, the escalating need for compute means that only entities able to invest heavily can fully capitalize on AI’s potential. Small teams or startups, despite their agility, must compete for these scarce resources, which are dominated by a few hyperscalers and well-funded labs.
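The scale of that bottleneck is easiest to see with a back-of-envelope calculation. The sketch below multiplies GPU count, wall-clock time, and an hourly rental rate to estimate a training run's cost; all of the figures (10,000 GPUs, 90 days, $2 per GPU-hour) are illustrative assumptions for the arithmetic, not numbers from Brockman or OpenAI.

```python
# Back-of-envelope estimate of a frontier-model training run's cost.
# Every figure here is a hypothetical assumption chosen for illustration.

def training_cost_usd(num_gpus: int, days: int, price_per_gpu_hour: float) -> float:
    """Total cost = GPU count x wall-clock hours x hourly rental price."""
    gpu_hours = num_gpus * days * 24
    return gpu_hours * price_per_gpu_hour

# Hypothetical run: 10,000 GPUs for 90 days at $2 per GPU-hour.
cost = training_cost_usd(num_gpus=10_000, days=90, price_per_gpu_hour=2.0)
print(f"~${cost:,.0f}")  # prints "~$43,200,000"
```

Even with these conservative placeholder numbers, a single run lands in the tens of millions of dollars, which is why only well-funded labs and hyperscalers can train at the frontier.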

Brockman addressed the democratization of AI access, pointing to falling inference costs as a positive trend. Running AI models has become cheaper over time, allowing broader experimentation. Yet he cautioned that training new frontier models remains prohibitively expensive, potentially concentrating power among a handful of players. OpenAI itself grapples with this, relying on its partnership with Microsoft for vast Azure compute clusters. Brockman advocated continued investment in hardware innovation, such as custom AI chips, to alleviate shortages and drive costs down further.
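The effect of falling inference prices on small teams can also be shown with simple arithmetic. The sketch below computes how many tokens a fixed experimentation budget buys at two price points; both per-million-token prices ($10 and $0.50) are hypothetical placeholders, not actual provider pricing.

```python
# Illustrative effect of falling inference prices on a fixed budget.
# Both price points below are hypothetical, chosen only to show the arithmetic.

def tokens_per_budget(budget_usd: float, price_per_million_tokens: float) -> int:
    """How many tokens a fixed budget buys at a given per-million-token price."""
    return int(budget_usd / price_per_million_tokens * 1_000_000)

budget = 100.0  # a small team's monthly experimentation budget (assumed)
at_old_price = tokens_per_budget(budget, price_per_million_tokens=10.0)
at_new_price = tokens_per_budget(budget, price_per_million_tokens=0.50)
print(at_old_price, at_new_price)  # prints "10000000 200000000"
```

A 20x price drop turns 10 million tokens into 200 million for the same spend, which is the dynamic behind Brockman's point that cheaper inference broadens who can experiment, even while frontier training stays out of reach.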

Looking ahead, Brockman predicted a proliferation of AI-powered “small teams” that punch above their weight. He likened this to historical technological shifts, where tools like compilers and integrated development environments leveled the playing field for programmers. AI, he argued, acts as a force multiplier, compressing timelines from years to months or weeks. A solo developer augmented by AI could rival a mid-sized engineering firm, fostering a renaissance of independent innovation.

This vision carries implications for industries beyond software. In research, small labs could replicate the discoveries of large institutions. In business, nimble startups might disrupt incumbents by rapidly iterating products with AI assistance. Brockman also stressed ethical considerations, including the safety alignment work built into models like o1, which prioritizes reliable reasoning over raw speed.

Challenges persist, particularly around compute equity. Brockman called for policy measures to encourage compute abundance, such as expanded energy infrastructure for datacenters and streamlined permitting for new facilities. He also highlighted OpenAI’s internal practices, like systematic AI usage across teams, which have boosted overall velocity.

In summary, Brockman’s outlook positions AI as a great equalizer for team productivity, contingent on compute affordability. As AI evolves, the barrier to matching large-scale output diminishes for those who can harness the necessary resources, heralding an era where ingenuity and investment in compute define competitive edges.

Gnoppix is the leading open-source AI Linux distribution and service provider. Having integrated AI since 2022, it offers a fast, powerful, secure, and privacy-respecting open-source OS with both local and remote AI capabilities. The local AI runs fully offline, so no data ever leaves your computer. Based on Debian Linux, Gnoppix is available free of charge with numerous privacy- and anonymity-focused services.

What are your thoughts on this? I’d love to hear about your own experiences in the comments below.