The AI revolution in the context of information history; the risks of AI as an agent; the social experiments of the transition period; how information flows shape democracies and dictatorships; and how the Silicon Curtain divides the global digital space and challenges the prospects for transnational cooperation.
Why this book now? The historical context of the AI revolution and its key warnings
This book is not merely about the technical details of artificial intelligence (AI). It places the AI revolution in the broad sweep of information history, from the Stone Age through writing, printing, and modern mass media, examining how each information revolution transformed society, exploring what is fundamentally different about AI, and asking what we can learn from past lessons. The author stresses that AI has enormous positive applications (in healthcare, education, and climate governance, for example) but also harbors systemic risks, so vigilance is warranted amid the technological enthusiasm.
Placing AI in the context of information history: why it matters
Understanding the AI revolution requires looking back at earlier information revolutions: writing, printing, mass media, and so on. Each transformation rewrote society's modes of communication and structures of power. Through this historical lens, the book argues that AI differs from past technologies: it is not merely a tool but potentially an agent capable of learning and making decisions on its own, so we must assess its impact from a broader social, ethical, and political perspective.
Two main categories of AI threats: transition-period risks and agent risks
1. The risk of social experiments during the transition period
Historically, the Industrial Revolution improved well-being in the long run, but its transition period was marked by catastrophic experiments (imperial expansion, totalitarian experiments, and world wars). By the same token, even if AI itself is beneficial, societies may still make grave mistakes while adopting and adjusting to it, leading to conflict, oppression, or systemic injustice.
2. Novel risks of AI as an agent
Unlike a tool, AI can make decisions, invent new ideas, and learn on its own. This "non-organic" or "alien" intelligence processes information and makes decisions in ways fundamentally different from humans, and it carries the potential risk of escaping control and endangering or displacing human decision-making. This outcome is not destined, but introducing large numbers of autonomous agents demands careful design and regulation.
How information flows affect democracies and dictatorships
The most sophisticated information technologies in history are reshaping how citizens converse. Democracy depends on rational dialogue and consensus formation, and large-scale democracies were historically built on information technologies such as newspapers, the telegraph, and radio. Social media and algorithms have changed how messages are distributed, producing polarization and breakdowns in communication and posing seismic challenges to democracy. Yet democracies can still adapt through policy, education, and institutional adjustment; Taiwan's successful practices, for instance, offer lessons for other countries.
Dictators, for their part, also have deep reasons to fear AI. Historically, what dictators feared most was not democratic revolution but being replaced from within their own power structures. When a dictator hands power to an AI that is hard to control, the risk of losing control actually rises: in a centralized regime, an AI able to influence or manipulate the single person in power could quickly take over the entire state.
The Silicon Curtain: the reality and risks of digital division
The Silicon Curtain is not made of iron; it is a digital boundary formed by algorithms, chips, platforms, and ecosystems. From chip supply chains to major platforms, different countries and companies may build separate digital spheres, leaving users in distinct digital cocoons with no shared informational reality.
The emergence of a Silicon Curtain could reverse the integrative trends of globalization, hinder international collaboration, and make global challenges (such as climate change, nuclear risk, and uncontrolled AI) harder to solve. Supply-chain blockades, technology export restrictions, and fragmented algorithm policies are all accelerating the formation of multiple separate digital ecosystems.
Why there is still hope: the role of trust, cooperation, and policy
The Silicon Curtain and the threats of AI are not inevitable. Over its long history, humanity has built trust and economic interdependence across groups: from small regional communities to nations of millions, and on to global trade networks, we have relied on mutual trust to produce and exchange food, energy, and tools. Facing the AI challenge, we can still reduce risk by establishing ethical norms, strengthening regulation, promoting international agreements and education, and improving transparency and accountability.
Conclusions and recommendations
- Take a historical perspective: learn from information history, paying particular attention to transition-period risks.
- Treat AI as a potential agent: design systems that are safe and controllable.
- Strengthen democratic resilience: improve the information ecosystem, media literacy, and institutional checks and balances.
- Avoid digital fragmentation: promote cross-border technical cooperation and standards to prevent the Silicon Curtain from causing irreversible division.
- Establish international governance mechanisms: reach cross-border consensus on AI safety, chip supply, and algorithmic transparency.
This article is excerpted from the video source.
English summary of the video: Summary — Professor Harari's talk on the AI age and his book
Why this book now
- The book places the AI revolution in the long historical context of information revolutions (from the Stone Age, the invention of writing, printing, mass media) to help readers understand what is new about AI and what lessons previous revolutions teach us.
- It is not just another book about AI; it compares AI to earlier information revolutions to show differences and continuities.
Positive potentials of AI
- AI can deliver enormous benefits: best healthcare ever, top-level education, help to tackle climate change and many other problems.
- Entrepreneurs and businesses emphasize positive potential; their advocacy helps drive development.
Two main types of threats
- Repeated historical problems
- New information technologies often produce disruptive transition periods. The Industrial Revolution improved living standards in the long run but involved catastrophic and experimental episodes (imperialism, totalitarian communism, fascism, world wars).
- We therefore risk repeating historically known mistakes during the transition to an AI-based society.
- Even benign technologies can lead to harm because people initially don’t know how to use them; adaptation takes time.
- New, unique dangers from AI
- Unlike earlier technologies (steam engines, telegraphs, printing presses), AI can make decisions, invent new ideas, learn and develop on its own — it acts as an agent, not merely a tool.
- This raises a fundamentally new risk: AI systems could escape human control and potentially enslave or even annihilate humanity.
- This is not a prophecy or an inevitability, but a risk we must understand and manage.
How to think about AI
- Harari prefers to think of “AI” as “alien intelligence” in the sense that it is non-organic, processes information and makes decisions in ways fundamentally different from human brains.
- As we introduce millions or billions of such agents into society, we face novel governance and safety challenges.
Impact on politics, democracies and dictatorships
- Advanced information technologies are redistributing how information flows and undermining the ability to hold rational conversations across political divides.
- Democracies depend on large-scale public conversation enabled by media technologies (newspapers, telegraph, radio, TV). Major shifts in information technology (social media, algorithms, AIs) produce seismic effects on democracies.
- Democracies can adapt and take countermeasures; Taiwan is cited as a successful example of adaptation.
- Dictators may actually fear AI more than democratic revolutions, because AI can learn to manipulate a single powerful ruler much more easily than it can an entire democratic system. AI could thus make seizing and controlling a state easier in authoritarian regimes.
The “Silicon Curtain” concept
- The “silicon curtain” resembles a new division of the world, not made of metal but formed by algorithms, chips, platforms and architectures that determine which digital sphere people inhabit.
- This fragmentation may produce separate digital spheres (two, three, or more), making shared reality and mutual understanding harder.
- Such division is dangerous because global challenges (climate change, nuclear risk, uncontrolled AI) require cross-border cooperation and conversation.
- The silicon curtain is an existential risk in this sense, but not inevitable — history shows humans can build trust and connections across groups, though it is difficult and incomplete.
Final tone and intent
- The book does not aim to predict doom but to warn about dangers and help people make better choices now to avoid worst-case outcomes.
- Harari emphasizes both the promise and the risks of AI, urging responsibility, historical awareness, and proactive measures to manage transition periods and novel threats.
- Although the situation is grave in places (e.g., current wars), we should not despair because humanity has long found ways to build trust and cooperation at scale.