Nick Bostrom’s book “Superintelligence,” essential reading for the community of people concerned about the risks of artificial intelligence, begins with a fable: a tribe of sparrows, tired of living a marginal existence, becomes convinced that everything would be better if they had an owl to help them build nests, feed and care for their young, and protect them from predators. Excited by the idea, the sparrows decide to go out looking for an egg or owl chick that they can raise as one of their own.
Only Scronkfinkle, “an irritable one-eyed sparrow,” notes that perhaps they should consider the risks of living with an adult owl and think first about how an owl might be tamed and domesticated. But the other sparrows brush aside his fears, saying that simply finding an owl will be hard enough and that there will be time to worry about taming it after one has been found and raised. So, while the other sparrows fly off in search of eggs and chicks, Scronkfinkle and a few of his colleagues try to think through the problem of domestication. It is difficult work, lacking an owl to practice on and fearing that at any moment the others might return with a chick and put their theories to a brutal test.
The fable perfectly captures what AI alarmists believe is happening right now. The technology’s growing power, publicly manifested so far in chatbots and image generators, is an owl chick growing up in our nest, and our alarmists are not yet ready to tame it. It is in the spirit of Scronkfinkle that a group of Silicon Valley notables, including Elon Musk, has just signed an open letter calling for a pause of at least six months in large-scale AI experiments, to allow time for our safety protocols to be properly updated.
But there is a crucial difference between the fable and our own situation, one that helps explain why the human voices calling for a pause are even harder to heed than Scronkfinkle’s. Note that the sparrows, despite their naivety, at least know what an owl looks like, what it is and what it does. So it should not be hard for them, and it is not hard for the reader, to imagine the powers a wild owl could wield: the familiar swiftness, vision, and strength with which it could rip apart and devour an entire tribe of hapless sparrows.
But with a hitherto abstract “superintelligence,” the crux of the matter is that there is currently no analogous entity that we can observe, understand, and learn to fear. The alarmists don’t have a simple risk scenario, a clear description of this creature’s claws and beak; they have only a series of highly uncertain scenarios built on even more uncertain speculation about what an intelligence somehow greater than ours might be able to do.
This does not mean their arguments are wrong. In fact, one could argue that the uncertainty itself makes superintelligent AI that much more fearsome. But usually, when we humans turn against a technology or seek to limit its reach, we have a pretty good idea of what we fear might happen, the kind of apocalypse we want to avoid. The treaties banning nuclear tests were signed after Hiroshima and Nagasaki, not before.
Or, to take a less existential example, the current discussion about limiting children’s exposure to social media feels urgent because we have been living with the internet and the iPhone for some time; we already know a lot about the downsides of online culture. But it is hard to imagine that anyone could have been convinced to regulate TikTok preemptively in 1993.
I write this as someone who has a hard time grasping the specific catastrophes we might suffer if the AI alarmists are right. I have trouble even understanding precisely what we mean when we talk about “superintelligence.”
Part of my uncertainty has to do with debates about machine consciousness and whether an AI would have to gain self-awareness to become genuinely dangerous. But it is also possible to distill the uncertainty into narrower questions that do not require taking a position on the nature of the self or the soul.
Let’s look at one of them, then: Will a supercharged machine intelligence have a much easier time predicting the future?
I think my own intelligence is not especially suited to making that kind of prediction. When I look back at my own writing, I see that I have done well at describing large-scale trends that end up shaping events, like the transformation of the Republican Party into a more downscale, working-class coalition, say. But when broad trends produce specific events, I just guess like everyone else: while I understood the forces that made the rise of Donald Trump possible, I stated that he would not be the Republican presidential nominee in 2016.
There are, however, forms of intelligence that do better than mine at making concrete predictions. If you read the work of Philip Tetlock, who studies “superforecasters,” it becomes clear that certain habits of mind yield better forecasts than others, at least when the futurology is expressed in probabilities and judged across a wide range of predictions.
Thus, at the start of the Syrian civil war, an average political analyst might have put the probability of Bashar al-Assad losing power within six months at 40%. The superforecasters, looking a little deeper into the situation, assessed the probability at just under 25%. Assad’s subsequent survival in power does not by itself prove that the superforecasters were right; perhaps the dictator had merely defied the odds. But it does help raise their overall batting average, which, across a range of similar scenarios, is higher than that of the average analyst.
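(A note on what this “batting average” means: in Tetlock’s research, forecasters are scored with measures like the Brier score, which rewards probability estimates that track what actually happens across many questions. Here is a minimal sketch of that scoring logic in Python; the questions and numbers below are invented for illustration, not drawn from Tetlock’s data.)

```python
# Brier score: the mean squared error between probability forecasts
# and 0/1 outcomes. Lower is better; 0.0 would be a perfect record.

def brier_score(forecasts, outcomes):
    """Average squared error of probability forecasts against binary outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Five invented questions of the form "will the leader fall within six months?"
# (1 = fell, 0 = survived). Illustrative numbers only.
outcomes = [0, 0, 1, 0, 1]

analyst = [0.40, 0.50, 0.30, 0.45, 0.35]          # hedges toward 50%
superforecaster = [0.23, 0.30, 0.60, 0.20, 0.55]  # bolder, better calibrated

print(f"analyst:         {brier_score(analyst, outcomes):.3f}")
print(f"superforecaster: {brier_score(superforecaster, outcomes):.3f}")
# analyst:         0.305
# superforecaster: 0.109
```

The point of scoring this way is that no single call settles anything: a forecaster who hedges everything near 50% ends up with a mediocre score, while a bolder, better-calibrated one scores lower (better) over many questions, even when an individual call like Assad’s survival goes either way.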
But it is not so much higher that a statesman could simply rely on superforecasters to run up some kind of geopolitical winning streak. So one imaginable goal for a far superior intelligence would be to radically improve on this kind of merely human prognostication.
We know that AI already has pattern-recognition powers that surpass those of its human creators and sometimes baffle them. For example, for reasons still unknown, AI can predict a person’s sex, with an above-average degree of accuracy, from images of their retinas alone. And there is growing evidence that it could do remarkable diagnostic work in medicine.
So imagine larger-scale pattern recognition applied to global politics, predicting not just a vague probability of a dictator’s downfall but a specific type of plot, in a specific month, with such-and-such specific conspirators. Or a specific military outcome, in a specific province, followed quickly by specific events.
In this scenario, a superintelligence would be functioning as a version of the “psychohistory” imagined by Isaac Asimov in his “Foundation” series, which allows its architect to guide future generations through the fall of a galactic empire. And such a prophetic gift would have obvious applications beyond politics: making predictions about the stock market, for example, or powering the kind of pre-crime prediction engine envisioned by Philip K. Dick and later, in adaptation, by Steven Spielberg.
But is any intelligence, supercharged or not, capable of such predictions? Or is the world so irreducibly complex that even if you pile up pattern recognition and have the AI run endless simulations, you will still end up with probabilities not much more accurate than those achievable with human judgment and intelligence?
I suspect the second hypothesis is correct: that the returns on any kind of intelligence as a forecasting tool diminish, and that the world does not lend itself to being predicted in such fine-grained detail. When a chatbot reveals, Sherlock Holmes style, the detailed evidence that our human powers have missed and that elucidates the Nord Stream pipeline explosions or explains the disappearance of Malaysia Airlines Flight 370, then I will start to expect psychohistory from a future version of the chatbot.
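(One standard illustration of why piling up simulations may not help, offered as a sketch rather than a proof: many real-world systems are chaotic, so a minuscule error in measuring the present compounds into total uncertainty about the future, no matter how much computation is thrown at the model. The toy Python example below uses the logistic map, a textbook chaotic system: two runs that start a hair’s breadth apart soon disagree completely.)

```python
# Sensitive dependence on initial conditions: two simulations of the same
# chaotic system, started almost identically, soon disagree completely.
# Toy example using the logistic map at r = 4, a textbook chaotic regime.

def logistic(x, r=4.0):
    """One step of the logistic map: x -> r * x * (1 - x)."""
    return r * x * (1 - x)

a, b = 0.3, 0.3 + 1e-10  # initial states differ by one part in ten billion
for step in range(1, 61):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: |a - b| = {abs(a - b):.6f}")
# Within a few dozen steps the gap grows as large as the system allows:
# the tiny initial error has erased any predictive value of the simulation.
```

If geopolitics has even a fraction of this sensitivity, more computing power buys sharper short-run pattern matching, not long-run prophecy.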
But it seems more likely that AI will not have the power of actual prophecy, and any doomsday scenario that requires Machiavellian prophetic power from our would-be overlord is not very credible, however “super” its predictive abilities become.
Or maybe I’m just a sparrow who has never seen an owl firsthand and doesn’t believe it can see so well in the dark.