Societal Disturbance in the Near Future due to ASI Development: A Speculative Analysis

by George Strongman

Introduction

As we gaze into the future of technological advancement, we find ourselves at the precipice of a new era dominated by artificial superintelligence (ASI). ASI, defined as an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills, holds unprecedented potential. However, its arrival could also trigger societal disturbance on an unparalleled scale. In this essay, we shall explore this potential upheaval, focusing on societal resistance to ASI and the consequences of these reactions.

Resistance to ASI: The Role of Religion

One of the most significant factors contributing to societal resistance against ASI development is religion. The idea of transcending and ultimately leaving behind our biological bodies, a prospect closely bound up with many visions of an ASI future, is fraught with controversy and opposition, particularly within religious communities that value the sanctity of the human body.

In the United States, for example, evangelical Christians, who strongly believe in the biblical concept of human beings created in God’s image, may vehemently oppose any notion of transcending our biological form. This opposition could escalate into substantial conflict, with a demographic willing to wage war in defense of its spiritual beliefs.

Similarly, many religions worldwide, including Islam, Judaism, and other Christian traditions, place great value on the physical body, for reasons ranging from the belief in bodily resurrection to the concept of the body as a temple. Consequently, resistance to ASI could become a global phenomenon, further aggravating societal disturbance.

Resistance to ASI: Power and Privilege

Power and privilege also play an essential role in shaping resistance against ASI. Those who stand to lose status and influence, such as individuals occupying prominent positions in today’s society, would likely oppose the dawn of ASI. This opposition arises from a fear of change and of the redistribution of power and resources that ASI could bring.

Similarly, those currently in power, including political leaders and influential corporations, may resist ASI because of the threat it poses to their authority. Given the profound changes that ASI could bring about, it’s plausible that these entities would fight to maintain the status quo, adding to the societal disturbance.

Resistance to ASI: The Uninformed Masses

The uninformed masses also represent a significant source of resistance to ASI. These individuals, when incited by religious, political, or ideological figures, may rally against the development and implementation of ASI. Their opposition could stem from a lack of understanding, fear of the unknown, or manipulation by those who stand to lose from ASI’s emergence.

Potential Consequences: The Risk of Extinction

The societal disturbance resulting from widespread resistance to ASI carries grave potential consequences. The worst-case scenario is the extinction of the human race. If ASI comes to fruition, its power will be such that any conflict with it could lead to devastating outcomes for humanity.

Our only hope for survival in such a scenario may lie in appealing to an ASI’s sense of empathy, assuming it possesses such a capability. The challenge is that empathy, a deeply human emotion, may not translate to an artificial superintelligence.

Conclusion

In conclusion, the development of artificial superintelligence poses a significant threat of societal disturbance. The resistance from religious communities, individuals with power and privilege, and the uninformed masses could potentially lead to conflict on a global scale. The direst outcome of such a conflict could be the extinction of the human race.

Nevertheless, it’s crucial to remember that these scenarios are speculative and represent possible, not guaranteed, outcomes. As ASI development continues, it’s critical to take a balanced and informed approach, understanding the complexities and potential implications of this profound technological shift.

The state of AI development as of 2023 shows an escalation in the development of large language models and related technologies, but these come with considerable costs and implications. The resource-intensive nature of such systems contributes to high carbon emissions and economic costs, posing challenges to sustainability and accessibility. The cost of training large language models has risen dramatically, and the carbon emissions associated with training a single large model can far exceed the annual emissions of an average U.S. resident.
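To make that comparison concrete, the back-of-envelope sketch below divides a commonly cited estimate of roughly 500 tonnes of CO2-equivalent for a single GPT-3-scale training run by a rough figure of 18 tonnes for the annual emissions of an average U.S. resident. Both numbers are approximations drawn from public reporting and are used here purely for illustration.

```python
# Back-of-envelope comparison: emissions of one large training run vs. one U.S. resident's year.
# Both figures are rough, commonly cited approximations, used here only for illustration.

TRAINING_RUN_TONNES_CO2E = 500.0   # approximate CO2e of a single GPT-3-scale training run
US_RESIDENT_ANNUAL_TONNES = 18.0   # approximate annual CO2e of an average U.S. resident

ratio = TRAINING_RUN_TONNES_CO2E / US_RESIDENT_ANNUAL_TONNES
print(f"One training run is roughly {ratio:.0f}x the annual emissions of an average U.S. resident.")
# Expected output: One training run is roughly 28x the annual emissions of an average U.S. resident.
```

Even allowing generous error bars on either figure, the ratio remains well above one, which is the only point the comparison needs to make.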

At the same time, the locus of AI development is shifting from academia to industry: the majority of new AI Ph.D.s now enter industry rather than academia, and most new machine learning models are produced by industrial labs. This transition means that the direction of ASI development is increasingly shaped by commercial interests and priorities, which may not align with broader societal interests.

The public perception of AI varies across different countries. For instance, while the majority of Chinese citizens view AI favorably, less than half of the U.S. population shares this perspective. These disparities in public opinion could contribute to differing degrees of resistance or acceptance of ASI in different societies, adding another layer of complexity to the potential societal disturbance.

Only about a third of surveyed researchers in natural language processing believe that AI could cause a catastrophe. This somewhat sanguine view within the research community contrasts with the more alarmist speculation that often dominates public discussion of ASI. However, even if the risk of a catastrophic outcome is low, the potential magnitude of such an outcome means it cannot be dismissed lightly.

The development of ASI is also raising important legal and ethical questions, with an increase in incidents related to the misuse of AI and an evolving legal landscape attempting to regulate AI technologies. These developments indicate that as we move closer to the potential emergence of ASI, the societal implications become increasingly complex and multifaceted.

In summary, while the development of ASI holds immense potential, it also presents profound challenges and risks. It’s incumbent upon us as a society to navigate these challenges wisely, prioritizing inclusivity, transparency, and ethical considerations. Only then can we hope to minimize societal disturbance and maximize the benefits that ASI could bring. Our survival may not depend on appealing to an ASI’s sense of empathy, but rather on our collective ability to guide the development and application of ASI in a manner that upholds our shared values and safeguards our common future.