By Paolo Falconio *
This essay analyzes the role of artificial intelligence (AI) as a new structural factor in geopolitical competition in the 21st century, highlighting how it has become a critical infrastructure for state sovereignty and a multiplier of economic, military, and regulatory power. The paper explores three global governance models — American, Chinese, and European — emphasizing their profound value-based and institutional divergences and the resulting competition for normative hegemony. It also delves into Sino-American rivalry, with particular attention to the control of the semiconductor supply chain, the militarization of AI, and systemic risks associated with automated decision-making.
A dedicated section addresses the position of the Arab world, analyzed in its internal heterogeneity between the modernization ambitions of the Gulf monarchies, structural constraints of North African countries, and challenges in post-conflict contexts. The essay further discusses the expansion of digital authoritarianism, new forms of technological dependence, and emerging challenges for the epistemic sovereignty of states. Finally, it examines the fragmentation of global AI governance and the growing contradiction between the technology’s potential for ecological transition and its significant energy footprint.
While providing a systemic and interdisciplinary overview, some topics — particularly domestic socio-economic impacts and the perspectives of the Global South beyond the MENA region — are not fully explored due to space constraints and will be the subject of future analytical developments.
Artificial intelligence (AI) has established itself as one of the main drivers of contemporary geopolitical transformations, transcending its purely technological dimension to become a critical state infrastructure and a strategic resource in international competition. The ability to develop, control, and implement advanced AI systems now determines a state’s position in the global power hierarchy, analogous to the historical significance of controlling energy resources, trade routes, or manufacturing capacity (1).
AI’s specificity compared to other strategic technologies lies in its pervasive and dual-use nature: it simultaneously permeates civil, military, economic, and social sectors, making the boundaries between national security, economic development, and social control highly porous. Technological dominance in AI requires control over a complex and interdependent ecosystem: advanced semiconductors, cloud architectures, massive datasets, specialized skills, and computational capacity. This value chain, highly concentrated geographically and technically, generates new forms of strategic dependence and systemic vulnerability for excluded states.
Divergent AI Governance Models and Global Normative Competition
AI governance revolves around three fundamental paradigms reflecting different conceptions of the relationship between the state, market, technology, and citizen: the U.S. liberal-market model, the Chinese centralized-authoritarian model, and the European regulatory-guarantee model. These approaches are not merely technical variants but embody deeply divergent political and value visions, giving rise to competition for global normative hegemony.
The U.S. paradigm is based on a governance architecture characterized by minimal ex-ante regulation, strong entrepreneurial dynamism, and the predominance of private actors in defining technological standards. Silicon Valley tech giants — Google, Microsoft, Amazon, Meta, OpenAI — do not operate merely as commercial enterprises but function de facto as private legislators of the global digital ecosystem, setting technical protocols, ethical norms, and operational practices that are then adopted worldwide (2). This model offers significant advantages in terms of innovation speed and ability to attract talent and capital but raises critical questions about the democratic legitimacy and accountability of private power structures performing quasi-governmental functions without electoral mandate or democratic oversight. Increasing awareness of these risks has led to growing debates in the United States on the need for more robust regulation, as evidenced by congressional hearings and sector-specific legislative proposals concerning privacy, antitrust, and algorithmic security. Palantir deserves mention in this regard. The company aggregates data from government sources and private clients and is thereby capable of building a personal profile of every single citizen, whose level of dangerousness is determined by an algorithm. If such profiling relates to concrete threats (such as the risk of terrorism), that is one thing. But what if the algorithm also predicts a risk associated with critical thinking? Then the American model would turn out to be not so different from China's Social Credit System. In short, the risk of drifting toward an algocracy is by no means remote.
The Chinese model represents the systematic antithesis of the U.S. approach, centered on strategic planning, massive state intervention in the digital economy, and close integration of technological development objectives with political control imperatives. The “New Generation Artificial Intelligence Development Plan” (2017) and subsequent five-year plans explicitly identified AI as an enabling technology for industrial modernization, military superiority, and reinforcement of “social stability” (3). In this paradigm, AI is not conceived as an autonomous domain of private innovation but as a tool of state power to be directed toward politically defined objectives. The Social Credit System, urban surveillance systems based on facial recognition, and online speech control platforms exemplify this vision, where algorithmic optimization serves political order and Party-State consent. This approach has proven highly effective in accelerating technology adoption and mobilizing large-scale resources but raises international concerns regarding the export of authoritarian digital governance models to third countries, particularly through Belt and Road and Digital Silk Road initiatives.
The European paradigm stands out for its approach centered on the proactive protection of fundamental rights, algorithmic transparency, and legal accountability of AI system developers and users. The AI Act, adopted in 2024 after complex negotiations among EU institutions and member states, represents the first comprehensive attempt internationally to create a regulatory framework based on a risk-based classification of AI systems and differentiated obligations according to risk level (4). This model expresses the European legal tradition of social constitutionalism and preventive protection but faces the crucial challenge of balancing rights protection with economic competitiveness. Critics argue that excessive regulation may discourage innovation and consolidate Europe’s lag relative to the U.S. and China in developing native AI capabilities, while proponents contend that Europe could transform its regulatory advantage into global soft power, as occurred with the GDPR, exporting normative standards through the “Brussels Effect.”
These three models do not coexist peacefully in the global space but actively compete for the adherence of third countries, generating normative fragmentation dynamics and alignment pressures. Medium-sized states and emerging economies increasingly face choices that transcend technical dimensions to assume geopolitical significance, needing to select not only technological suppliers but also associated normative and value packages. This competition for normative influence is a crucial yet often underestimated dimension of contemporary strategic rivalry.
Open Source as an Alternative Model: Democratization or Illusion?
Between the U.S. liberal-market model, the Chinese centralized-authoritarian model, and the European regulatory-guarantee model, there exists a conceptual and practical space that should not be overlooked: the AI open-source ecosystem. This is not a marginal technical detail but potentially an alternative technological governance model with profound geopolitical implications.
The open-source movement in AI — embodied by projects such as Hugging Face, Stability AI, BLOOM, LLaMA (despite initial restrictions), and countless community models — is based on principles radically different from the three dominant paradigms:
Radical transparency: code, architectures, and often datasets and model weights are publicly accessible and inspectable. This contrasts sharply with the opacity of Big Tech (which treats models as trade secrets) and the secrecy of Chinese state projects.
Redistributive accessibility: anyone with the skills and hardware can use, modify, and adapt these models without relying on commercial vendors or state authorization. This potentially lowers entry barriers for countries, institutions, and communities unable to afford enterprise licenses or unwilling to depend on strategically problematic suppliers.
Distributed innovation: thousands of researchers, developers, and activists worldwide simultaneously contribute to model improvement, creating a decentralized knowledge network no single actor fully controls.
Bottom-up technological sovereignty: for Global South countries or medium-sized actors, open source offers a potential path to native AI capabilities without replicating the entire proprietary tech stack. A country can take an open-source model, adapt it to linguistic and cultural contexts, train it on local data, and deploy it according to national priorities — all at significantly reduced costs and dependencies.
However, it would be naive to idealize open source as a universal solution. Its contradictions are multiple and instructive:
Control stratification: even in open source, hierarchies exist. Hugging Face is a corporation with investors, internal governance, and commercial interests. Large open-source models still require massive computational resources for training — resources only a few actors can afford. Paradoxically, openness can mask new forms of concentration: the model is open, but those who trained the foundational model still control the base architecture.
Dependence on proprietary hardware: one may have the most advanced open-source model in the world but must still run it on Nvidia chips, AWS/Azure servers, or cloud infrastructure controlled by the very Big Tech the open source movement seeks to bypass. Software freedom clashes with hardware materiality — a fundamental material contradiction no idealism alone can solve.
Political ambiguity: open source can serve emancipation or oppression. An authoritarian government can use an open-source model for mass surveillance as easily as an NGO can use it to monitor human rights violations. Technology is inherently dual-use; code openness does not determine political use.
Sustainability issues: who funds open-source development? Often the same Big Tech companies (Google supports TensorFlow, Meta released LLaMA) that strategically shape ecosystems, attract talent, and prevent regulatory lock-in. Or public institutions with limited resources. Financial dependence generates vulnerabilities: an open-source project can be abandoned, acquired, or redirected according to logics beyond community control.
Despite these contradictions, open source represents a real geopolitical possibility for actors unwilling or unable to compete directly with the U.S. and China:
For Europe: it could overcome the dilemma between rights protection and technological lag. Instead of creating “European champions” to compete with OpenAI and Google (so far unsuccessful), the EU could invest heavily in open-source infrastructure, creating globally accessible digital commons governed by European principles — coherent with its regulatory vocation and capable of generating real soft power.
For the Arab world and Global South: open source could be the most realistic path to genuine technological capabilities. Instead of relying on Western or Chinese vendors, countries such as Morocco, Tunisia, and Egypt could collaborate regionally to adapt open-source models to underrepresented Arabic languages, local cultural contexts, and specific development priorities. This requires investments in skills and infrastructure but is more sustainable than direct competition.
For social movements and civil society: open source provides counterpower tools. When Big Tech censors content, governments surveil, or algorithms discriminate, access to open-source models allows alternatives, claim verification, and abuse documentation — a form of popular digital sovereignty independent of the state.
However, for this third path to be practically viable, deliberate public policies are required: investments in public computational infrastructure, widespread skills training, legal frameworks protecting open source from predatory appropriation, and international cooperation on open standards. Open source alone is insufficient — it requires a political project guiding it toward democratic purposes. It is an optimistic alternative path worthy of exploration.
Sino-American Strategic Competition: A New Technological Cold War?
The U.S.-China rivalry for AI supremacy constitutes the backbone of contemporary technological geopolitics, structuring alliances, defining normative boundaries, and fueling economic-technological decoupling dynamics reminiscent, albeit with significant differences, of Cold War bipolar logic. Unlike the 20th-century rigidly separated blocs, the current competition occurs in a context of deep economic interdependence and integrated global value chains, generating systemic tensions between national security logics and economic efficiency imperatives.
The semiconductor dimension is the most critical battleground. Advanced chips — particularly those produced with 7, 5, and 3-nanometer lithography processes — are the indispensable material infrastructure for training and operating cutting-edge AI models (5). Control of this highly concentrated and technically complex supply chain has become a primary strategic goal for both superpowers. The U.S. has progressively expanded export restrictions to China on advanced semiconductors, manufacturing equipment (notably Dutch ASML EUV lithography systems), and EDA design software to slow Chinese technological advancement and preserve U.S. competitive advantage in critical domains such as supercomputing and military AI. China, in turn, has launched massive investments to achieve semiconductor self-sufficiency (“Made in China 2025”) but faces significant technical obstacles, especially in the most advanced technological nodes. This competition for semiconductor control is considered by some analysts as the “new oil” of the 21st century, with systemic implications for global power balances.
The military dimension of AI competition raises even more alarming questions. AI’s inherently dual-use nature — usable simultaneously for civil and military applications — makes the distinction between commercial innovation and military advantage extremely porous (6). AI systems applied to autonomous warfare, strategic surveillance, command and control, cyber warfare, and intelligence analysis profoundly alter the character of military competition. Autonomous weapon systems capable of identifying and engaging targets without human oversight raise unprecedented ethical and strategic dilemmas, eroding established principles of international humanitarian law such as combatant/non-combatant distinction and proportionality. The risk of unintended escalation from high-speed automated systems is particularly concerning in crisis scenarios, where compressed decision times could generate “flash war” dynamics similar to algorithmic “flash crashes” in financial markets.
Alongside AI militarization, a body of research and practice has emerged focusing on using technology for conflict prevention and peacebuilding. Early warning systems based on machine learning, predictive escalation pattern analysis, AI-assisted mediation platforms, and peace agreement monitoring tools represent promising "peace-tech" applications. However, the effectiveness of these tools depends on a level of international cooperation and data transparency increasingly difficult to achieve amid growing mistrust between rival powers. The paradox is that precisely when technology could enable more sophisticated preventive diplomacy, the deterioration of international relations obstructs not only their practical implementation but the very realization of these possibilities.
The Arab World in the Global AI Ecosystem: Between Modernizing Ambitions and Structural Constraints
Arab states occupy a peculiar and layered position in the geopolitics of artificial intelligence, characterized by the coexistence of modernizing ambitions, structural technological dependencies, and significant internal asymmetries within the region. The diversity of trajectories among the Gulf monarchies, North African states, and post-conflict contexts makes generalizations inappropriate, requiring a differentiated analysis of national AI adoption and governance pathways.
The Gulf monarchies—particularly the United Arab Emirates, Saudi Arabia, and Qatar—have developed long-term strategies identifying artificial intelligence as a key pillar of the post-oil transition and future economic competitiveness (7). The UAE AI Strategy 2031 and Saudi Vision 2030 are not mere programmatic declarations; they are implemented through massive infrastructure investments, strategic partnerships with global technology leaders, aggressive talent and capital attraction policies, and efforts to position themselves as regional hubs for digital innovation. The UAE established the world’s first Ministry of Artificial Intelligence (2017), developed the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) as a center of research excellence, and launched initiatives such as the “AI for Good” program to apply AI to social development challenges. Saudi Arabia has heavily invested in building NEOM, a smart city designed entirely around AI technologies, and has attracted significant investments from global tech corporations seeking access to the regional market. These countries benefit from extraordinary fiscal capacities derived from hydrocarbon revenues, allowing them to finance ambitious technological transformations without the usual budgetary pressures.
However, AI adoption in the Gulf monarchies also faces significant challenges. Dependence on external technology providers—mainly from the United States and China—creates strategic vulnerabilities and limits decision-making autonomy in sensitive areas. The limited presence of indigenous innovation ecosystems, the shallow depth of locally specialized human capital, and the persistent centrality of rentier-based economic models constrain the long-term sustainability of technological strategies. Moreover, the use of AI for social surveillance and political control raises concerns about the compatibility between technological modernization and the persistence of authoritarian structures, suggesting that technological adoption does not automatically lead to democratizing transformations but can instead consolidate forms of technologically sophisticated “digital authoritarianism.”
North African states—Morocco, Tunisia, Egypt, Algeria—present significantly different profiles, characterized by more limited fiscal resources, often unstable political contexts, and less developed digital infrastructure. In these contexts, AI adoption focuses on targeted sectoral applications: digitalization of public services, agricultural optimization through precision farming, telemedicine to extend healthcare access to rural areas, and e-government to improve administrative efficiency. Morocco has developed significant expertise in digital services outsourcing and is attempting to position itself as a technological platform between Europe and Africa. Tunisia has a well-established tradition of quality technical training, generating a significant diaspora of tech talent. Egypt, with its demographic weight and geostrategic position, represents a potentially significant market for AI adoption.
Nonetheless, these countries face significant structural constraints: inadequate digital infrastructure (limited broadband penetration, incomplete mobile coverage in rural areas), brain drain (emigration of technical talent to more lucrative markets), political instability discouraging long-term investment, educational system shortcomings limiting advanced skills formation, and budgetary constraints preventing massive R&D investments. Technological dependence manifests here in even more acute forms, with the adoption of “turnkey” solutions provided by external actors, offering limited room for local customization and learning.
Post-conflict contexts—Syria, Iraq, Yemen, Libya—represent the most critical cases, where AI is predominantly used for military and security applications rather than civilian development. In these scenarios, AI-based surveillance systems, predictive analysis for counterterrorism operations, and automated warfare are the main applications, reinforcing the association between advanced technology and coercive control. State fragmentation, destroyed infrastructure, human capital flight, and persistent insecurity make it extremely difficult to envision inclusive technological development pathways in the short- to medium-term.
Digital Authoritarianism, Technological Sovereignty, and New Power Asymmetries
A critical dimension of AI geopolitics in the Arab region and beyond concerns the expansion of forms of digital authoritarianism enabled by technology (9). AI provides authoritarian regimes with unprecedented tools for social control in terms of effectiveness, pervasiveness, and predictive capability: facial recognition systems for monitoring public spaces, automated analysis of digital communications, social credit scoring, algorithmic profiling of dissidents and opponents, and computational manipulation of online public discourse.
These systems create an “algorithmic panopticon” that profoundly alters the dynamics between state and society, changing the risk calculations of collective action and dissenting expression. The awareness of potential permanent surveillance induces self-censorship and conformity, making overt repression less necessary, resulting in forms of technologically mediated “soft authoritarianism.” China represents the most advanced laboratory of these practices, but similar technologies and organizational models have been exported or adopted in numerous Arab, African, and Asian contexts.
This spread of digital authoritarianism intertwines with issues of technological sovereignty and strategic dependence. Almost all Arab states rely on external providers for critical AI infrastructure: cloud computing (dominated by Amazon Web Services, Microsoft Azure, Alibaba Cloud), foundational AI models (controlled by OpenAI, Google, Anthropic, Baidu), semiconductors (Taiwan, South Korea, United States), and specialized expertise. This dependence generates multiple vulnerabilities: technical vulnerabilities (backdoors, service interruptions, planned obsolescence), economic vulnerabilities (pricing power of monopolistic suppliers), strategic vulnerabilities (technological blackmail, political conditionalities), and epistemic vulnerabilities (algorithmic biases, cultural inadequacy of systems trained on non-representative datasets).
The debate on Arab digital sovereignty raises fundamental questions: is it possible and desirable to develop native AI capabilities? What are the costs and benefits of technological indigenization strategies? Is it preferable to specialize in application niches rather than compete directly with technological superpowers? How can sovereignty be balanced with global interoperability? Answers to these questions vary significantly across countries, reflecting different strategic calculations on national priorities, available resources, and geopolitical contexts.
Nonetheless, AI also offers significant opportunities for sustainable development in the Arab region: optimized management of scarce water resources through precision agriculture and smart water management; energy efficiency and transition to renewables via smart grids and predictive systems; improved logistics in critical strategic port nodes (Suez, Jebel Ali, Tangier-Med); enhanced healthcare systems through telemedicine and AI-assisted diagnostics; personalized education through adaptive learning systems. The crucial challenge is to create institutional, regulatory, and governance conditions that guide technological adoption toward inclusive development rather than authoritarian control.
The Battle for Global Governance: Who Writes the Rules of the Digital Ecosystem?
The fundamental issue at the heart of the geopolitics of artificial intelligence concerns the mechanisms, principles, and actors of global governance: who decides which AI applications are legitimate? Which technical standards will become global? Which ethical principles will govern algorithmic development? How can innovation be balanced with the protection of rights? Who is responsible when AI systems cause harm?
Algorithms are not neutral technical objects; they embed value choices, epistemological assumptions, and political interests. The way an AI system is designed, trained, and implemented reflects decisions about which objectives to optimize, which datasets to use, which biases to tolerate, and which trade-offs to accept between accuracy and fairness. These choices have profound distributive consequences, determining who benefits and who is harmed by the spread of AI.
Global regulatory competition revolves around the ability to impose standards, norms, and practices that reflect particular preferences and interests:
Europe proposes a framework based on fundamental rights, transparency, accountability, and ex-ante impact assessment, attempting to translate the European constitutional tradition into a global regulatory architecture. The success of this strategy depends on generating a “Brussels Effect” in AI similar to that produced by the GDPR, encouraging global actors to adopt European standards to access the European market.
The United States maintains a fragmented, sectoral approach, with de facto standards set by Big Tech practices and a gradual emergence of state and federal regulations in specific areas (privacy, antitrust, content moderation). The strength of the U.S. model lies in continuous innovation and commercial dominance, making American standards hard to avoid even for those seeking alternatives.
China promotes a vision emphasizing state sovereignty, national security, coordinated development, and “socialist values,” seeking to export this model through bilateral technology partnerships, infrastructure investments (Digital Silk Road), and coalitions within multilateral organizations.
This competition takes place across multiple negotiating forums: the United Nations (proposals for a global AI treaty), the OECD (its AI Principles), UNESCO (ethical recommendations), the ITU (technical standards), the G7 and G20 (political declarations), and numerous regional and sectoral initiatives. Institutional fragmentation reflects and amplifies substantive disagreements, making the emergence of a truly global and binding governance regime extremely difficult.
A crucial but often overlooked dimension concerns the representation of the Global South in these processes. Most countries lack the technical, diplomatic, and financial capacities to participate effectively in complex AI governance negotiations, risking the imposition of standards developed elsewhere without adequate consideration of their own priorities, vulnerabilities, and cultural perspectives.
Reflection on epistemic dependence—importing AI means importing its biases and worldview—is one of the least discussed aspects in current literature, yet it informs this entire discussion. From governance to indigenous systems, dependence is pervasive. This representation deficit raises fundamental questions about the democratic legitimacy of global AI governance and the possibility of building a genuinely inclusive technological order.
AI and the Ecological Transition: The Energy Contradiction of Artificial Intelligence
An increasingly relevant dimension of AI geopolitics concerns its environmental impact and the tensions it generates with climate sustainability goals. At COP30 and other international environmental forums, AI has emerged as a point of contention between those presenting it as an indispensable tool for climate mitigation and those highlighting its growing ecological footprint (10).
AI as a tool for climate mitigation finds application in numerous areas: advanced climate modeling for more accurate forecasts; optimization of electrical grids to integrate intermittent renewable sources; precision agriculture to reduce chemical and water inputs; logistics efficiency to minimize transport emissions; identification of deforestation patterns through satellite analysis; design of new materials for energy storage or carbon capture. These uses have legitimized the integration of AI into national and international climate strategies, with some governments and organizations proposing that AI investment be considered part of net-zero strategies.
However, training and operating advanced AI models entails increasing energy and environmental costs, raising questions about the sustainability of the current technological trajectory. Training a single large language model can consume energy equivalent to that used by hundreds of households in a year and generate CO₂ emissions comparable to several transatlantic flights. Data centers hosting AI systems consume massive amounts of electricity and water for cooling, with significant impacts on local power grids and water resources—particularly problematic in regions already under water stress, such as the Middle East and North Africa.
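The orders of magnitude involved can be made concrete with a rough back-of-envelope calculation. All figures below are illustrative assumptions, not measurements: training energy is set to the order of published estimates for a GPT-3-class model, and household consumption, grid carbon intensity, and per-passenger flight emissions vary widely by model, region, and methodology.

```python
# Back-of-envelope estimate of the energy and carbon footprint of training
# one large language model. Every input is an illustrative assumption.

TRAINING_ENERGY_MWH = 1_300        # assumed training energy, order of GPT-3-class estimates
HOUSEHOLD_MWH_PER_YEAR = 10.5      # assumed average annual household electricity use
GRID_KG_CO2_PER_MWH = 400          # assumed grid carbon intensity (kg CO2 per MWh)
FLIGHT_T_CO2_PER_PASSENGER = 1.0   # assumed round-trip transatlantic flight, per passenger

# Energy expressed as household-years of electricity consumption
households_per_year = TRAINING_ENERGY_MWH / HOUSEHOLD_MWH_PER_YEAR

# Emissions in tonnes of CO2, then as passenger flight equivalents
emissions_tonnes = TRAINING_ENERGY_MWH * GRID_KG_CO2_PER_MWH / 1_000
flight_equivalents = emissions_tonnes / FLIGHT_T_CO2_PER_PASSENGER

print(f"~{households_per_year:.0f} household-years of electricity")
print(f"~{emissions_tonnes:.0f} tonnes of CO2")
print(f"~{flight_equivalents:.0f} transatlantic passenger round-trips")
```

Under these assumptions, a single training run works out to roughly 120 household-years of electricity and around 500 tonnes of CO₂, consistent with the orders of magnitude cited above; the point of the sketch is not precision but that the inputs, especially grid carbon intensity, are themselves geopolitical variables.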
The geography of data centers is thus becoming a geopolitically relevant dimension, with countries rich in low-cost energy (Iceland, Norway, Canada, some Gulf states) positioning themselves as potential hubs for energy-intensive cloud computing. This geography intersects with considerations of digital sovereignty, data security, and climate justice in complex and sometimes contradictory ways.
The debate raises crucial distributive questions: who benefits from AI, and who bears its environmental costs? Is it acceptable for the acceleration of AI in developed countries to exacerbate the global energy burden while large populations in the Global South still lack basic electricity access? How can the potential of AI for the ecological transition be balanced with its growing carbon footprint? These questions have no simple technical answers and require political negotiations on priorities, distribution of benefits and burdens, and models of sustainable development.
Conclusions: Toward a Global Digital Order?
The geopolitics of artificial intelligence represents one of the main arenas for redefining the contemporary international order, with implications far beyond the technological sphere, affecting the distribution of power, the nature of state sovereignty, mechanisms of social control, economic structures, military competition, and prospects for global cooperation.
The stakes are not simply which country will develop the most advanced AI or which company will dominate digital markets. The real competition concerns the ability to define the institutional, normative, and value-based architecture that will govern the 21st-century digital ecosystem: which rights will be protected and which sacrificed? How to balance security and freedom? How will power be distributed among states, corporations, and citizens? What model of technological development—extractive or sustainable, centralized or distributed, proprietary or open-source—will prevail?
Three alternative scenarios appear possible:
Fragmentation into separate digital spheres (“splinternet”), with rival technological blocs operating according to incompatible standards, generating economic inefficiencies, barriers to innovation, and risks of escalation due to mutual misunderstanding.
Hegemony of a dominant model (likely Sino-American), imposed through combinations of technological superiority, infrastructural dependencies, and market power, marginalizing the preferences of other actors.
Negotiated multilateral governance, based on compromises between different models, interoperable standards, and shared accountability mechanisms—most desirable but also hardest to achieve given the erosion of trust and multilateral institutions.
For the Arab world and the Global South more broadly, the key challenge is to avoid mere technological subordination by developing capacities for critical and selective appropriation of AI that respond to self-determined development priorities rather than external market or security imperatives. This requires investments in technical education, applied research, digital infrastructure, and the development of autonomous regulatory frameworks that balance innovation and rights in culturally appropriate ways.
Artificial intelligence is not an inevitable technological destiny but a field of collective political choice. Its trajectory will depend on decisions, power dynamics, and social mobilizations in the coming years. From this perspective, AI geopolitics is not merely an academic subject but an arena of political agency and the construction of alternative futures. To avoid the trap of technological determinism, it is misleading to say that “AI will change the world”; rather, different conceptions of AI are already shaping different worlds, each embodying profoundly divergent political visions. This is an important methodological point: technology is not neutral but incorporates value choices and power relations.
Notes:
1 – Nye, J. “Power and Interdependence in the Digital Age”. Foreign Affairs, 2019. Here Nye develops the concept of “cyber power” as a new dimension of state power, historically comparable to the control of traditional strategic resources.
2 – Zuboff, S. The Age of Surveillance Capitalism. New York: PublicAffairs, 2019. A foundational work theorizing the emergence of a new economic regime based on the private appropriation of human experience transformed into behavioral data.
3 – Creemers, R. “China’s Social Credit System: An Evolving Practice of Control”. SSRN, 2018. A detailed analysis of the technical and political mechanisms of China’s social credit system and its implications for authoritarian governance.
4 – Veale, M., Borgesius, F.Z. “Demystifying the EU AI Act”. Computer Law Review, 2021. A critical examination of the European legislative process and the tensions between rights protection and economic competitiveness.
5 – Miller, C. Chip War: The Quest to Dominate the World’s Most Critical Technology. Scribner, 2022. A historical reconstruction and strategic analysis of the global competition for control of the semiconductor industry.
6 – Allen, G., Chan, T. “Artificial Intelligence and National Security”. Harvard Belfer Center, 2017. The first systematic study of AI’s implications for national security and military competition.
7 – Kanna, A. “The Gulf States and Technological Modernization”. Middle East Report, 2020. A critical analysis of the technological modernization strategies of the Gulf monarchies and their structural contradictions.
8 – World Bank. MENA Tech Overview, 2023. A statistical and analytical overview of the state of the technology ecosystem in the MENA region.
9 – Human Rights Watch. “AI Surveillance in the Middle East”. Report 2021. Documentation of digital surveillance practices and their consequences for human rights in the region.
10 – Strubell, E., Ganesh, A., McCallum, A. “Energy and Policy Considerations for Deep Learning”. ACL, 2019. A pioneering study of the carbon footprint of training machine learning models and its implications for environmental sustainability.
Bibliography:
- Allen, G., Chan, T. Artificial Intelligence and National Security. Harvard Belfer Center, 2017.
- Creemers, R. “China’s Social Credit System: An Evolving Practice of Control”. SSRN, 2018.
- Human Rights Watch. AI Surveillance in the Middle East. HRW Report, 2021.
- Kanna, A. “The Gulf States and Technological Modernization”. Middle East Report, 2020.
- Miller, C. Chip War: The Quest to Dominate the World’s Most Critical Technology. Scribner, 2022.
- Nye, J. Do Morals Matter? Presidents and Foreign Policy. Oxford University Press, 2020.
- Strubell, E., Ganesh, A., McCallum, A. “Energy and Policy Considerations for Deep Learning”. ACL, 2019.
- Veale, M., Borgesius, F.Z. “Demystifying the EU AI Act”. Computer Law Review, 2021.
- World Bank. Middle East and North Africa Tech Landscape Report. 2023.
- Zuboff, S. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs, 2019.
* Member of the Honorary Governing Council and lecturer at the Society of International Studies (SEI).
All Rights Reserved.