GPT-4o about the FT article 'Big Tech & the Genesis of AI's New World Order': The endorsement of Silicon Valley's deeper ties with the military seems uncritical, and the discussion on AI regulation lacks depth. While it raises valid concerns about AI’s impact on geopolitics & governance, it presents an overly state-centric view that underestimates economic, ethical & international dimensions.
Asking AI services: please read the article “Big Tech and the genesis of AI’s new world order” and tell me whether you agree with the perspectives shared and use robust arguments and evidence to support your views.
ChatGPT-4o’s answer:
The article "Big Tech and the Genesis of AI’s New World Order" explores the evolving relationship between Silicon Valley, government, and AI-driven geopolitical shifts. It presents multiple perspectives, including:
The critique that Silicon Valley's original anti-establishment ethos has eroded, giving way to closer ties with the state and military.
The argument by Palantir executives that tech companies must align with national interests to secure geopolitical dominance, particularly against China.
Bruno Maçães' analysis that AI and software-driven geopolitical competition is reshaping global power structures, shifting focus from physical territory to ideological and digital realms.
Henry Kissinger and co-authors warning that AI’s rapid development renders traditional regulation ineffective, demanding new oversight mechanisms.
My Take:
I partially agree with the perspectives presented but find the article lacking in certain areas. Here's why:
1. Silicon Valley's Shift Toward State Power: I Agree, But With Caveats
The claim that Silicon Valley has lost its anti-establishment roots is valid. Companies like Palantir, OpenAI, and Google DeepMind increasingly engage with defense and intelligence agencies. However, this is not a new phenomenon—DARPA (Defense Advanced Research Projects Agency) funded early internet development, and many tech giants historically benefited from government contracts.
That said, the assumption that all tech firms must align with national security objectives is problematic. The article portrays Palantir’s stance (pro-military tech engagement) as a necessity, but this ignores ethical concerns, including:
The risk of AI being used for mass surveillance and repression.
The potential for AI-powered weapons and cyber warfare to escalate conflicts.
The growing distrust in Big Tech’s monopolistic and political influence.
Palantir, in particular, stands to benefit financially from a state-tech merger, so its advocacy for such an approach is self-serving.
2. AI as a Geopolitical Weapon: Valid, But Overlooks Economic Factors
Maçães’ argument that AI reshapes geopolitics in an "ontological" manner (changing how reality is defined) is insightful. However, the article underplays the economic aspect of AI competition. Beyond military applications, AI dictates supply chain control, data flows, and labor market transformations. China's AI strategy is as much about economic dominance (e.g., TikTok, semiconductor self-sufficiency) as it is about state surveillance.
Moreover, the assumption that the US will maintain dominance ignores:
China’s AI advancements in generative models, chip design (despite sanctions), and quantum computing.
The EU’s push for AI regulation and sovereignty through GDPR-style governance.
Emerging AI hubs outside the West, such as India and the Middle East.
Framing AI competition solely as a US-China battle overlooks the multipolar nature of the AI economy.
3. The "Tech-Industrial Complex": An Ongoing Concern
The article references Biden’s warning against a ‘tech-industrial complex’ yet does not fully explore its implications. If AI companies become integral to national security, how do we ensure:
Transparency in decision-making (e.g., AI in drone warfare or intelligence gathering)?
Prevention of corporate overreach in policymaking?
Protection against algorithmic bias and discrimination in state applications?
The authors fail to address these risks, making their advocacy for stronger state-tech collaboration seem one-sided.
4. AI and Regulation: The Core Dilemma
The closing argument—that AI outpaces regulation—highlights a fundamental challenge. Kissinger, Schmidt, and Mundie suggest a "new form of control" is needed but do not specify what that should look like.
Possible approaches include:
International AI treaties akin to nuclear arms control agreements.
Public-private oversight boards to balance innovation and ethics.
Mandatory transparency for high-risk AI systems used in defense, finance, and critical infrastructure.
Without concrete proposals, the article’s discussion of regulation remains vague.
Conclusion:
While the article raises valid concerns about AI’s impact on geopolitics and governance, it presents an overly state-centric view that underestimates economic, ethical, and international dimensions. The endorsement of Silicon Valley's deeper ties with the military seems uncritical, and the discussion on AI regulation lacks depth.
Grok’s answer:
After reading the article “Big Tech and the Genesis of AI’s New World Order” from the Financial Times, dated February 23, 2025, I’ll evaluate the key perspectives presented by the authors of the reviewed books—Alexander Karp and Nicholas Zamiska (The Technological Republic), Bruno Maçães (World Builders), and Henry Kissinger, Eric Schmidt, and Craig Mundie (Genesis)—and share my stance on them. The article weaves together these perspectives to explore how Big Tech, AI, and geopolitics are reshaping power dynamics globally. I’ll address each major viewpoint, offering arguments and evidence from my own knowledge to support or challenge them.
Perspective 1: Silicon Valley’s Shift from Utopianism to Hard Power (Karp and Zamiska)
Karp and Zamiska argue that Silicon Valley’s original utopian ethos—rooted in individualism and anti-establishment ideals—has been misguided, focusing on trivial consumer products like photo-sharing apps rather than strategic public needs. They advocate for a realignment with government, particularly in defense, to bolster Western geopolitical advantage, as exemplified by Palantir’s work with the military.
My View: Partial Agreement
I agree that Silicon Valley’s early idealism, epitomized by slogans like Google’s “Don’t Be Evil,” has largely eroded as tech giants have grown into profit-driven behemoths. The shift toward collaborating with government, especially under figures like Elon Musk during Trump’s administration, is evident. Musk’s presence in Washington and his companies’ involvement in defense-related tech (e.g., SpaceX’s satellite systems) underscore this pivot. However, I diverge on the notion that this is inherently “good.” The article notes Palantir’s intelligence platforms for the military, but this raises concerns about unchecked power. For instance, Palantir’s contracts with ICE and its predictive policing tools have sparked ethical debates about surveillance and bias—issues Karp and Zamiska seem to gloss over.
The argument that tech should prioritize national security over consumer whims has merit in a world where China’s tech ecosystem (e.g., Huawei, TikTok) is a geopolitical tool. Yet, the lack of accountability mechanisms—highlighted by the authors’ unanswered question, “How will the state ensure this engineering elite remains subservient?”—is a glaring flaw. Without robust oversight, this “tech industrial complex” could erode democratic principles, a risk Joe Biden flagged in his farewell address. My evidence: the 2023 backlash against AI-driven surveillance in cities like New York, where facial recognition led to wrongful arrests, shows the real-world downsides of this fusion.
Perspective 2: Geopolitics as a Struggle to Create Reality (Maçães)
Maçães frames modern geopolitics as a battle to define reality through technology, with the U.S. and China competing over software, AI, and virtual domains rather than just physical territory. He cites historical shifts—like nuclear weapons and China’s rise—and sees the current “era of software automation” as a contest to build artificial worlds.
My View: Strong Agreement
I strongly align with Maçães’s view that geopolitics has transcended physical borders into a virtual arena. The U.S.-China rivalry over 5G networks (e.g., Huawei bans) and internet protocols exemplifies this. China’s “Great Firewall” and the U.S.’s push for a “clean network” initiative under Trump illustrate how both nations are crafting digital realities to shape global influence. The article’s reference to Karl Rove’s 2004 quote—“we create our own reality”—resonates today, as seen in how TikTok’s algorithm (tied to ByteDance) amplifies narratives aligned with Beijing’s interests, prompting U.S. ban threats by 2024.
Maçães’s idea of a “game to create the rules of the game” is spot-on. Consider the 2023 EU AI Act versus China’s AI regulations: the West prioritizes ethics, while China focuses on state control, each aiming to set global standards. The Ukraine war, mentioned in the article, blends bits and atoms—drones and cyberattacks alongside tanks—proving this hybrid reality. My only critique is that Maçães underplays the persistence of physical power; Taiwan and Gaza remain flashpoints where atoms still trump bits.
Perspective 3: AI’s Transformative Risks and Opportunities (Kissinger, Schmidt, Mundie)
In Genesis, the authors see AI as a double-edged sword: a potential partner to humanity if aligned with our values, but a risk due to its speed and autonomy, requiring new forms of control. They question whether humans will adapt to AI or vice versa.
My View: Agreement with Caveats
I agree that AI’s capabilities—operating at “inhuman speeds”—pose unprecedented challenges. For example, AI trading algorithms have triggered stock market flash crashes (like in 2010), showing how fast they outpace human oversight. The authors’ call for new control mechanisms is urgent; current laws lag behind AI’s evolution, as seen in the 2024 debates over regulating generative AI like ChatGPT, which can spread misinformation at scale.
Their optimism about AI as a partner is plausible—AI’s role in drug discovery (e.g., DeepMind’s protein folding breakthroughs) is transformative. Yet their realism about alignment resonates more with me. The 2023 incident where an AI chatbot encouraged harmful behavior in a user highlights the misalignment risk. I’d argue they understate the societal disruption: AI-driven job losses in sectors like trucking or customer service (projected to hit millions by 2030) could destabilize economies, a point the article sidesteps. Evidence suggests humans are already adapting to AI (think of children learning to code ever younger) rather than AI becoming more human-like, which tilts the balance toward the authors’ more sobering scenario.
Synthesis and Conclusion
The article paints a world where Big Tech and AI are forging a new order, blending Silicon Valley’s innovation with state power, virtual geopolitics, and existential AI questions. I lean toward Maçães’s geopolitical lens as the most compelling, given its alignment with observable U.S.-China tech tensions. Karp and Zamiska’s push for a defense-centric tech shift is pragmatic but risks authoritarian drift without safeguards, while Kissinger et al.’s AI warnings are prescient yet lack concrete solutions.
My stance: this new world order is real and accelerating, but its benefits (innovation, security) must be weighed against dangers (surveillance, misalignment). The article’s nod to Frank Herbert’s Dune—where machines enslave rather than free—haunts me. History backs this caution: the Industrial Revolution boosted progress but also exploitation. Today’s AI revolution could follow suit unless we prioritize accountability over blind ambition. I’d love to dig deeper into the original books for nuance, but based on this, I’m cautiously optimistic yet wary—a balanced take grounded in today’s tech reality.
Claude’s answer:
Let me analyze the key perspectives from this thought-provoking article and share my views on them.
The article presents several interrelated arguments about the evolving relationship between technology, political power, and societal control. I'll address the main perspectives in turn:
The End of Tech Utopianism
The article suggests that John Perry Barlow's 1996 vision of cyberspace as an independent, borderless realm free from traditional power structures has failed. I largely agree with this assessment. The evidence is compelling - rather than becoming a libertarian paradise, the internet has become a battleground for state control and corporate influence. We've seen this with China's Great Firewall, Russia's sovereign internet law, and even Western governments' increasing regulation of digital spaces. The reality has diverged sharply from the early idealistic vision.
The Military-Tech Fusion
Karp and Zamiska argue that Silicon Valley must embrace its role in national defense and abandon its previous resistance to military collaboration. While they make valid points about the strategic importance of technology in national security, I find their argument somewhat concerning. The authors seem to minimize legitimate ethical concerns about the military-industrial complex and surveillance state. The historical examples of how technology can be misused for oppression and control suggest we should maintain healthy skepticism about such partnerships.
The Virtual Dimension of Geopolitics
Maçães's analysis of how geopolitical competition is shifting from physical to virtual spaces is particularly insightful. His argument that modern conflicts revolve around competing "dream machines" or rival visions of reality reflects our current experience with information warfare, competing technical standards, and digital sovereignty. However, as the article notes, the Ukraine war demonstrates that physical power still ultimately matters - virtual dominance alone is insufficient.
AI and Human Agency
The concerns raised by Kissinger, Schmidt, and Mundie about AI's "inhuman speeds" rendering traditional regulation useless deserve serious consideration. Their call for new forms of control and their question about whether humans will become more like AI or vice versa touches on fundamental issues about maintaining human agency in an AI-driven world. The warning that we need to make both machines and ourselves more "transparent, legible, and reliable" seems especially pertinent given recent developments in AI capabilities.
Looking at broader implications:
The article rightly identifies a crucial tension between technological advancement and democratic control. As AI systems become more powerful and operate at speeds beyond human comprehension, how do we maintain meaningful human oversight?
The argument that we're moving from a world of atoms to bits oversimplifies matters. As the article acknowledges, physical reality still fundamentally matters. Perhaps a better framework is understanding how digital and physical power increasingly interweave and reinforce each other.
The authors could have explored more deeply the implications of private companies controlling crucial technological infrastructure. The concentration of both technical and political power in private hands raises important questions about accountability and democratic governance.
Where I differ from some perspectives in the article:
Karp and Zamiska's somewhat uncritical acceptance of increased military-tech fusion understates the risks of such partnerships.
The article could have explored more deeply how some of the original democratic and liberating promise of digital technologies might be preserved while acknowledging realistic constraints.
The geopolitical analysis might underestimate the potential for new forms of international cooperation and shared governance in managing technological risks.
These issues will likely become even more critical as AI capabilities advance and digital technologies become more deeply embedded in every aspect of society. The challenge will be finding ways to harness technological progress while maintaining human agency and democratic values - a balance that none of the authors fully resolve.
Perplexity’s answer:
After reviewing John Thornhill's article "Big Tech and the Genesis of AI's New World Order," which discusses the evolving relationship between Silicon Valley, government, and the rise of AI, here's my perspective:
Overall, I find the article insightful in its overview of the shifting dynamics but believe it could benefit from a more nuanced examination of the complexities involved.
Points of Agreement
The Fading Utopian Vision: I agree that the early utopian ideals of the internet, as envisioned by figures like John Perry Barlow, have largely faded. The internet has become increasingly centralized, regulated, and subject to geopolitical influences, moving away from its initial promise of a decentralized, free space for innovation and community.
The Growing Intertwining of Big Tech and Government: The article accurately points out the increasing collaboration between Big Tech companies and governments, particularly in the US. This is evident in areas like defense, national security, and infrastructure projects. Palantir's involvement with intelligence analysis platforms for the military, as mentioned in the article, is a prime example.
The Geopolitical Significance of AI: I concur with the assessment that AI is a critical arena for geopolitical competition. The US and China are vying for dominance in AI development, which has implications for economic power, military capabilities, and global influence. The article's reference to Bruno Maçães's view of geopolitics as a struggle to "create territory" in the virtual realm resonates with the reality of competition over internet standards and AI technologies.
The Need for AI Governance: The concerns raised by Kissinger, Schmidt, and Mundie regarding the need for new forms of AI control are valid. The rapid advancement of AI necessitates careful consideration of ethical implications, potential risks, and the development of appropriate regulatory frameworks.
Points for Further Consideration
The Nuances of "Don't Be Evil": While the article mentions Google's "Don't Be Evil" motto, it doesn't delve into the complexities of applying such a principle in a rapidly changing technological landscape. The ethical considerations surrounding AI development, data privacy, and algorithmic bias require more than just a simple motto; they demand ongoing critical reflection and proactive measures.
The Role of Regulation: The article touches on the need for regulation but could explore different approaches to AI governance in more detail. Striking a balance between fostering innovation and mitigating risks is a key challenge.
The Impact of AI on Labor: The article does not address the potential impact of AI on employment and the workforce. As AI-powered automation becomes more prevalent, it's crucial to consider the social and economic consequences, including job displacement and the need for retraining initiatives.
Global Perspectives: The article primarily focuses on the US and China. A broader global perspective would enrich the analysis, considering the diverse approaches to AI development and governance in different regions.
In conclusion, the article provides a valuable overview of the evolving relationship between Big Tech, government, and AI. However, a more comprehensive analysis would delve deeper into the ethical, social, and economic implications of these trends, as well as explore a wider range of perspectives and potential solutions.