Managing the Unintended Consequences of Technology

Last month, I attended the first annual Unintended Consequences of Technology (UCOT) conference in San Francisco. I can’t say enough about the high quality of the content or the importance and timeliness of the topics discussed. The attendees were a fascinating collection of entrepreneurs, business executives, journalists, policy thinkers, human rights activists, artists and entertainers, students, and academics.

The goal of the event, as far as I could tell, was to facilitate a thoughtful conversation about advancing information technologies, the role of the internet in the 21st century, and some of the (mainly negative) unintended consequences of emerging technologies, even and especially those developed by well-intentioned technologists and innovators. The event never descended into panicked dystopia, and most of the speakers explicitly stated that they are not “anti-technology.”

However, as one speaker, John Powell, director of the Haas Institute for a Fair and Inclusive Society at UC Berkeley, put it, “technology will not save us.” This was a key theme of the day.

Many of the presenters stressed that humanity is living through a dangerous and precarious moment. As was pointed out repeatedly, the technologies discussed (AI, data science, social media, etc.) can and often do provide great benefit, but they are not without their risks to society.

Partly as a result of developments in technology, we’re living through a rise in global instability and an increase in threats stemming from issues like rampant misinformation online, biased and discriminatory algorithms, unemployment driven by technological advancement, a lack of effective government structures, wealth inequality, a rise in authoritarianism and surveillance capabilities, and recommendation engines that nudge users toward extreme content.

And yet, even amid these emergencies, there are real and practical ways to respond. A significant portion of many of the speakers’ talks focused on how society might address the problems described.

Here are some insights and solutions from the conference.

#1. Hire a more diverse workforce, especially in the field of technology

Currently, there is a lack of diversity among AI and data science practitioners. Not unrelatedly, we’re seeing the development of biased algorithms that discriminate, largely against people of color and especially women of color.

Tess Posner, CEO of AI4ALL, spoke on these issues and recommended that companies increase their focus on diversity and inclusion in their hiring. She argued that there really isn’t an excuse anymore (for example, the excuse that the talent pipeline doesn’t exist), since several organizations, including her own, are building these talent pipelines for women and minorities.

#2. De-bias your data sets

There was a focus on the issue of biased and discriminatory data sets, which yield biased and discriminatory algorithms. These algorithms can cause (and have caused) real harm in the world. When society carries inherent biases, data sets will often reflect those biases, and machine learning algorithms trained on those data sets will reproduce them.
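To make the mechanism concrete, here’s a minimal sketch (my own illustration, not from any of the talks) of one simple audit: measuring whether a model’s positive predictions are spread evenly across a protected group. The data, column names, and rates below are hypothetical.

```python
import numpy as np
import pandas as pd

def demographic_parity_gap(groups: pd.Series, preds: np.ndarray) -> float:
    """Return the gap in positive-prediction rates between groups.

    A gap near 0 means the model flags members of each group at
    similar rates; a large gap is a red flag worth investigating.
    """
    rates = pd.Series(preds).groupby(groups.to_numpy()).mean()
    return float(rates.max() - rates.min())

# Hypothetical example: simulate a model that inherited bias from its
# training data and favors group 1 over group 0.
rng = np.random.default_rng(0)
groups = pd.Series(rng.integers(0, 2, size=1000))  # protected attribute
preds = np.where(groups.to_numpy() == 1,
                 rng.random(1000) < 0.7,   # ~70% positive rate for group 1
                 rng.random(1000) < 0.4)   # ~40% positive rate for group 0
print(f"Demographic parity gap: {demographic_parity_gap(groups, preds):.2f}")
# Prints a gap of roughly 0.30 -- the audit surfaces the inherited bias.
```

Demographic parity is only one coarse metric, and a near-zero gap doesn’t by itself guarantee fairness, but cheap checks like this are a first step before digging into how the data was collected in the first place.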

Kathy Baxter, a researcher at Salesforce focused on ethical AI, gave a great presentation on the topic and delivered practical insights on what to do about it. You can read her work on Medium here (and especially this article).

#3. Consider dropping four-year degree requirements and use competency-based hiring frameworks instead

Yscaira Jimenez, CEO of LaborX, explained that two-thirds of the US workforce does not have a four-year degree. That’s a massive opportunity: a large talent pool that many companies fail to tap when seeking out highly creative and effective candidates. Tools like LaborX aim to address the challenges companies face in finding talented candidates from non-traditional backgrounds.

#4. Make education and workforce retraining as addictive as Facebook

And speaking of finding qualified candidates for unfilled roles: “We are headed for an education and workforce retraining nightmare,” said Bryan Talebi, CEO and co-founder of AhuraAI. He shared that, by some estimates, upwards of 38 percent of today’s jobs could be lost to automation within 12 years.

AhuraAI is working to leverage AI to personalize education and workforce retraining, making learning as appealing and stimulating as social media. If we’re going to retrain the workforce as quickly as automation displaces jobs, we’ll need tools like these to make education addictive.

#5. Become better, more responsible, and media-literate sharers of information online

Dan Gillmor, a professor of journalism at Arizona State University, argued that while it’s certainly important for businesses like Google, Facebook, and Twitter to recognize the role they play in spreading misinformation and targeted propaganda online, we the people should not look to these organizations to solve these issues for us on their own.

As consumers and sharers of information on the internet, we need to develop integrity and media literacy in how we operate online. This is about upgrading us, he said, not about looking to the platform companies to upgrade themselves (though they should do that too).

Gillmor has written about these issues on his Medium page (and specifically here, here, and here).

One thing he noted: if you get caught posting or sharing something you later learn is a spoof, acknowledge it publicly. (I guess I’ll start: I shared the fake shark-on-the-highway image during Hurricane Harvey. I’m not proud of that, but here we are.)

And it’s probably worth mentioning that these issues are poised to worsen with the development of new technologies like so-called deepfakes, first reported on by journalist Samantha Cole at Motherboard.

Another speaker who emphasized the seriousness of the issues ahead was Aviv Ovadya, the former chief technologist at the Center for Social Media Responsibility. I highly recommend this BuzzFeed profile by Charlie Warzel. It’s a sobering read.

#6. Develop a product impact advisory board

Ovadya’s presentation was one of the most eye-opening of the day, but he also left the audience with what seemed like a very tangible, reasonable, and practical solution: companies should employ something like a product impact advisory board. This panel of external experts would explore the unintended harms of the products being built and advise the product development team accordingly. The board would not have the authority to make product changes itself, but would be there to inform.

Ovadya made the point that, ultimately, companies should elevate the negative consequences of their technologies to the same level of urgency that legal and security issues are afforded today.

By the end of the day, the audience had heard from a wide variety of speakers on a range of topics (and I forgot to mention Roderick Jones’s fascinating question about whether the Second Amendment applies to cyberspace: should citizens be able to own their own cyber-weapons online?) and explored many of the unintended consequences of new technologies.

It was fascinating to note how many of the suggested solutions rely on the same technologies that created the issues and threats in the first place. It’s often pointed out (and was at this event) that technologies aren’t themselves good or bad; what matters is how we use them.

The takeaway, then, seems to be that we don’t just need better technologies; first, we need to become better humans. So let’s just do that…

Image Credit: IR Stone / Shutterstock.com

Aaron Frank
Aaron Frank is a researcher, writer, and consultant who has spent over a decade in Silicon Valley, where he most recently served as principal faculty at Singularity University. Over the past ten years he has built, deployed, researched, and written about technologies relating to augmented and virtual reality and virtual environments. As a writer, his articles have appeared in Vice, Wired UK, Forbes, and VentureBeat. He routinely advises companies, startups, and government organizations with clients including Ernst & Young, Sony, Honeywell, and many others. He is based in San Francisco, California.