
Steven Bartlett with Karen Hao
The field of artificial intelligence operates without a scientifically agreed-upon definition of human intelligence. This deliberate ambiguity lets tech leaders continually redefine artificial general intelligence to suit their immediate goals. When lobbying for deregulation, executives frame the technology as a cure for global crises; when pitching investors, they promise massive revenue and economy-wide automation. The fluid definition serves as a tool to mobilize capital and ward off regulatory oversight.
Sam Altman secures resources and alliances by strategically mirroring the fears and ambitions of key stakeholders. When recruiting Elon Musk to co-found a research nonprofit, Altman adopted Musk's own rhetoric about the existential threat of machine intelligence. Years later, internal documents and executive testimony revealed that Altman pitted internal teams against one another and withheld facts from his own board. His polarizing leadership style repeatedly alienated cofounders, prompting key figures to depart and launch rival organizations pursuing their own technological visions.
Leading artificial intelligence companies function as modern empires by unilaterally claiming resources and monopolizing knowledge production. They consume the intellectual property of writers and artists without compensation to train their models. Simultaneously, these organizations bankroll the majority of global academic research, effectively setting the scientific agenda and silencing critics who publish inconvenient findings about the harms of their products. This consolidation of power ensures that the public receives an artificially positive narrative about technological progress.
Industry executives deliberately cultivate public anxiety about the catastrophic risks of their own creations. By warning that their software could either destroy humanity or usher in a utopian era of abundance, they present themselves as the only capable guardians of the future. This dual narrative of doom and salvation is a calculated act of mythmaking. It justifies an anti-democratic approach to development, convincing policymakers that public interference is too dangerous and that control must remain concentrated among a few unelected corporate leaders.
The pursuit of automated intelligence degrades the labor market by eliminating entry-level professional roles and replacing them with precarious gig work. Laid-off professionals frequently transition into data annotation, a highly stressful, low-paying job that requires them to train the very software that replaced them. Third-party contracting firms pit these hidden human workers against one another for sporadic tasks. This dynamic erodes career mobility and forces highly educated individuals onto digital assembly lines with no job security or basic worker protections.
The ever-growing scale of model training requires colossal physical infrastructure that exacts severe environmental costs on vulnerable communities. Corporations construct massive supercomputing facilities that drain municipal power grids and compete with local residents for fresh water. In places like Memphis, companies deploy methane gas turbines to power their servers, pumping toxins directly into working-class neighborhoods already suffering from environmental racism. The push for computational supremacy sacrifices the physical health and resources of local populations to fuel digital products.
The industry's current trajectory prioritizes building massive, generalized statistical engines that consume tremendous resources, akin to rockets. These broad systems are designed to automate human labor and consolidate market dominance. Conversely, companies largely ignore the development of highly specific, resource-efficient tools, which function more like bicycles. Targeted systems that predict protein folding or assist in medical diagnosis require a fraction of the computational power and offer immense public benefit without the environmental and social collateral damage of generalized models.
Reversing the harms of unregulated technological expansion requires active democratic pushback and the dismantling of imperial corporate structures. Citizens and lawmakers are successfully stalling predatory data-center expansions through localized protests and municipal bans. Concurrently, creators and victims are filing lawsuits to reclaim their intellectual property and demand accountability for algorithmic harm. By withholding data and rejecting the narrative of inevitable adoption, the public can force the industry to abandon its exploitative practices and build specialized tools that genuinely serve human needs.