Fears, Promises, and Emerging Tech
When Gloria Calhoun decided to pursue a Ph.D. after a career in telecommunications, she initially planned to explore how the system of overhead telephone wires was moved underground. However, when she began studying how the overhead network was built in the first place, she was surprised to find little information on the topic.
"You would almost think that all those wires and poles just appeared by magic," Calhoun said.
As a student in the School of History and Sociology, Calhoun researches how the emergence of telecommunications infrastructures shaped the wire and cable industries. While digging into the history of overhead wires, she was struck by the way people created narratives about the "fears" and "promises" of emerging telegraph and telephone infrastructures — and how much they resembled the competing narratives around artificial intelligence (AI) today.
Fears, Promises, and Public Discourse
"Your viewpoints on the fears and promises of technology are often shaped by what you stand to gain or lose, so they vary from person to person and by time and place," Calhoun said. "With telecom infrastructure, some people's wildest hopes were realized, while others' worst fears came true. Some made lots of money from the networks, and others died building them."
The fears and promises around emerging technologies range from high expectations to deep skepticism. For example, telegraph technology promised to send messages quickly, annihilating time and space in communication, Calhoun explained. But others worried about how the wires degraded urban environments and how people could use the telegraph to distort the free flow of market information. With AI, the discourse is similar. We hope the technology will eliminate mundane tasks or solve problems we haven't yet cracked. However, concerns remain: Can we prevent bias in AI algorithms? Will job replacement disrupt the labor market?
Technology Doesn't Appear in Isolation
Just as telegraph debates raged through newspapers in the 1800s, the release of OpenAI's ChatGPT large language model in 2022 unleashed a blizzard of opinions in the media, with even some prominent AI researchers changing their minds about the technology.
What factors cause this range of opinions? Calhoun explains that the technology's use, its physical presence, its symbolic meanings, and its interactions with other technologies can all affect how we perceive it.
New technology is unpredictable because people often co-opt it in ways its inventors didn't intend or anticipate. Its physical presence also affects how we perceive it — even if it's invisible, "which can sometimes seem like a good thing, as for urban aesthetics, but can also make it feel more sinister," Calhoun said.
Symbolic meanings also vary, with a typical example being railroads, Calhoun explained. Were they a symbol of progress or a machine in the garden, as the historian Leo Marx famously phrased it? And finally, just as technology is often used in unintended ways, it also interacts with other tech in surprising ways. For example, low-voltage telecommunications lines aren't hazardous alone but can be lethal if crossed with high-voltage power lines, Calhoun said.
In a nutshell, "There is no blank slate. Technology doesn't appear in isolation or operate in a vacuum. We don't go overnight from inventing new technology to having it fully deployed and understood," Calhoun said. "People's perceptions change as their wants, desires, and expectations change."
The Regulatory Challenge
And then what? Fears and promises anticipate, but regulation reacts, Calhoun said. And often, there is a lag between the two.
"No matter how obvious it might be that somebody needs to do something, it's often not at all obvious who that somebody is or what they need to do," she said.
Calhoun explained that the main challenge for regulators is balancing competing stakeholder interests and dealing with a technology that evolves as they try to regulate it. For example, city leaders in Montreal tried to move the wires underground for 30 years. The overhead system was dangerous, with wires blocking firefighters from reaching burning buildings and deteriorating poles posing a safety hazard.
"And they hemmed and hawed and dithered, and finally, the fire insurance underwriters said, 'If you don't move these wires underground, we will no longer offer casualty insurance in the city center.' And that finally moved them to start doing something about it," Calhoun said. "That's a long way of saying that when the consequences of doing nothing outweigh the consequences of change, even with all the uncertainties the change may bring, that's when it usually starts to happen."
Kranzberg's Law
Mel Kranzberg, the founder of the History and Sociology of Technology and Science program at Georgia Tech, published Kranzberg's Laws in 1986. The first states, "Technology is neither good nor bad, nor is it neutral."
"That view was evolving then, and it's the same thing we're still discussing here," Calhoun said. As narratives of fears and promises play out with AI, it's impossible to label the tech itself as good or evil "because the social component of it is not the technology but the use. Technologies are human productions, and humans are neither all good nor all bad. So you have to focus on how we use it more than the technology itself."
From telegraphs and telephones to social media and artificial intelligence, deploying new technologies involves a learning process of trial and error. Most people will use them for good — "or at least try to," Calhoun said — and just like telegraph infrastructure, AI will be another lesson to look back on and dissect when the next society-changing creation comes along. But the history of technology has already taught us at least one thing, she said.
"New tech does not simply follow a predetermined path. It increases the range of possible paths but does not dictate which we follow or what will happen next. That's up to us."