
Artificial intelligence overview: Defining AI and the challenges it presents


Transcript:

Artificial intelligence overview: Defining AI and the challenges it presents.
R. David Edelman, Director, Massachusetts Institute of Technology Internet Policy Research Institute
 
GRAPHIC: [Defining AI and the challenges it presents.]

Most artificial intelligence, as people experience it today, is the intersection of two things:
A tremendous amount of data (what we used to call big data): more data than any human being could ever collect or process on their own
Intense computational power applied specifically to machine learning
 
GRAPHIC: [The intersection of big data and machine learning]

Machine learning is a set of techniques that are fundamentally math, but math with incredible computational power behind it, used to predict things.

Machine learning takes these vast quantities of information and finds subtle correlations and patterns in order to predict certain outcomes. It turns out prediction is a very valuable thing, not just in terms of human behavior, but in terms of digital/technical outcomes.
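
To make the idea of prediction from data concrete, here is a minimal sketch, not from the talk and using made-up numbers: a toy model fits weights to synthetic observations with ordinary least squares, the kind of "fundamentally math" step machine learning builds on, and then uses those weights to predict an outcome for an input it has never seen.

```python
# Illustrative sketch only: synthetic data and a plain least-squares fit,
# standing in for the "math plus compute" that drives prediction at scale.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "big data": an outcome driven by two hidden correlations plus noise.
X = rng.normal(size=(1000, 2))                                # observed features
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.1, size=1000)

# Fit: add a bias column and solve the least-squares problem (pure linear algebra).
X_design = np.column_stack([X, np.ones(len(X))])
weights, *_ = np.linalg.lstsq(X_design, y, rcond=None)

# Predict: apply the learned weights to an input the model has never seen.
x_new = np.array([0.5, -2.0, 1.0])                            # last entry is the bias term
print("predicted outcome:", x_new @ weights)
```

Real systems swap the least-squares fit for far more flexible models and far more data, but the shape of the pipeline (observe, fit, predict) is the same.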

AI, as most people experience it right now, whether it’s optimizing their newsfeed or recommending great restaurants, is the interface of big data and machine learning.

 
GRAPHIC: [AI limitations.]
 
The truth is, the basic science is really somewhat primitive. That isn’t to say there aren’t people who understand the deep math or what’s happening inside the silicon, but it’s very hard to get an artificially intelligent system to explain to you why it made the decisions it did. That can matter a lot. When you’re behind the wheel of an autonomous vehicle, you want to know how that car is making the decisions it does. You also want to know that the system is pretty robust. It’s pretty easy to fool a machine learning system, particularly in computer vision, into thinking it’s seeing something different. For example, a stop sign with a couple of pieces of tape on it could suddenly become a 45 mph speed limit sign.
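
The stop-sign remark is easier to see with a toy example. The sketch below is purely illustrative, using a made-up linear "classifier" with hypothetical labels rather than any real vision system: because the model sums small pieces of per-pixel evidence, nudging every pixel by a barely noticeable, coordinated amount can flip the predicted label.

```python
# Illustrative toy only: hypothetical classifier and labels, not a real vision model.
import numpy as np

d = 10_000                               # "pixels" in a toy image
w = np.ones(d)                           # toy classifier: score = sum of pixel values
x = np.zeros(d)
x[:100] = 1.0                            # clean image: score = +100

def predict(image):
    # Hypothetical labels, echoing the stop-sign example above.
    return "stop sign" if image @ w > 0 else "45 mph speed limit"

# Coordinated perturbation: shift every pixel slightly against the decision score.
epsilon = 0.02                           # 2% of the pixel range [0, 1]
x_adv = x - epsilon * np.sign(w)

print(predict(x))                        # stop sign (score +100)
print(predict(x_adv))                    # 45 mph speed limit (score 100 - 0.02 * 10000 = -100)
print(np.abs(x_adv - x).max())           # 0.02: no single pixel changed much
```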
 
Ultimately, these problems will be fixable. We have dealt with challenges like this before. But we are at a place where the science is just beginning, and yet the demand within industry to use these technologies, to sprinkle that AI fairy dust to transform business models, is moving just as rapidly, if not more so. That’s going to be a continual competition. More and more, we need companies to be asking the question as well: not just whether AI is capable of solving this problem, but whether it should solve this problem.
 
I don’t really think these systems have general intelligence. Another way to put that is that they don’t have much humanity. They don’t have much ethics. They don’t have a greater sense of what the right and wrong thing to do is. And one of the great risks we have with artificial intelligence is that because these systems optimize exactly for what they’ve been told to optimize for, they don’t think about the broader considerations.
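
A tiny, hypothetical example of that risk: an optimizer given a single objective (here, a made-up "engagement" score) does exactly what it is told, and any consideration that is not written into the objective, such as a made-up "harm" score, simply never enters the decision.

```python
# Hypothetical scores, for illustration only.
import numpy as np

# Candidate items to recommend, with an engagement score and an unstated harm score.
engagement = np.array([0.61, 0.55, 0.97, 0.40])
harm       = np.array([0.05, 0.02, 0.90, 0.01])

# The system optimizes exactly what it was told to optimize.
chosen = int(np.argmax(engagement))
print("chosen item:", chosen, "harm:", harm[chosen])          # picks the most harmful item

# The broader consideration only counts if a human writes it into the objective.
penalty = 1.0
chosen_adjusted = int(np.argmax(engagement - penalty * harm))
print("with harm penalty, chosen item:", chosen_adjusted)
```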


GRAPHIC: [AI ethics.] 

I think what we’ve seen in this most recent spat over social media and the broader tech-lash (technological backlash) is a growing sense of responsibility. There are broader social, political, and ethical consequences to a lot of these tools.

I think a lot of companies have been admitting in recent months that maybe they didn’t think hard enough. Maybe they didn’t game out exactly how those tools could be abused, either by criminals or by a sophisticated nation state trying to influence an election. That’s not normal stuff to think about, but it is the kind of thing that companies with a global impact on billions of people need to think about.

The truth is, as more of these tools are put in place, there are more opportunities for them to let us down. These tools need to be refined. They require adult supervision. The question that every engineer, at technology companies and beyond, needs to be asking now is not just “Have I optimized that problem?” but “Have I solved the problem in the right way, and have I thought about how this tool might be abused?”
 
GRAPHIC: [Charles Schwab    Own your tomorrow.]

(1218-8JU3)