
What is Artificial Intelligence?


Artificial intelligence is becoming a way of life, with new systems and uses popping up every day. From AI chatbots to machine learning, this technology has the potential to shape our future ... positively or negatively.

Dr. Michael Littman of the U.S. National Science Foundation talks about the early days of the field and the role humanity has to play in developing the AI of the future.


Transcript

My name is Michael Littman. I'm the division director for Information and Intelligent Systems at the National Science Foundation.

So there isn't a bright line between what is artificial intelligence and what is computing more generally, or automation more generally.

But usually we talk about something as being AI if the level of inference the system has to do is sufficiently sophisticated.

So if you're talking about something like a doorknob, it has the ability to react: you reach out to the doorknob, you twist it, and suddenly the door opens.

We wouldn't call that AI because it's such a direct connection between your input and the output.

But if we're talking about something like "I need to find the best way to ride a bike to get to downtown Washington, DC," the system has to make lots of trade-offs, do lots of reasoning, and draw on lots of different kinds of data to answer that question well. So maybe we would call that AI.

We certainly would have 20 years ago. Today, we just call it an app on your phone.

So artificial intelligence has been a pop culture staple forever, certainly since the earliest days of the field, because AI was appearing just as sci-fi as a genre was appearing, and the two seemed such a natural fit.

And you hear a lot of stories where the AI is the bad guy, Mr. Anderson.

So the AI goes off the rails and subjugates humanity, or humanity has to kind of rise up to fight it, and you get the Terminator situations.

But there's also been plenty of stories where the AI is a partner and helpful, and maybe even helping to extend human reach into the galaxy.

So I think a lot of people have latched on to AI because it really does spark the imagination. It really is this: well, wait, maybe we're not the only thing that can think about things. Maybe machines can think about things too, if we configure them the right way.

Until you were born, robots didn't dream, robots didn't desire unless we told them what to want.

That's weird. What are we, then? How are we different from those machines? How are we like those machines?

It raises really deep, interesting philosophical questions. One of the things that I've been finding extremely exciting in this particular moment, when everybody around me is talking about AI, is that for the first 35 years that I was in the field, nobody was talking with me about AI except for my fellow researchers.

But now everybody's got an opinion about it, and all these questions are starting to come back again.

Questions like: well, what is intelligent life? We thought we kind of knew, but now we have this machine that can kind of do the things that we thought were intelligent. But are they really intelligent, or aren't they? We're not even really sure, because it turns out there is no formal mathematical definition of intelligence.

So it's really causing people to go back and rethink these things. And I think science fiction, and pop culture in general, has a really interesting role to play in helping us, as a species and as a culture, digest these ideas and make sense of them.

Personally, I think that one of the things we're going to see happening in the near future is less focus on these very general systems that can do anything and talk to you about anything. They're super sexy and really interesting, and the fact that they can work at all is remarkable, but none of them are actually tailored to any one task that you might actually want to do.

Whatever your job is, however you're working, these systems could potentially give you some support, but they're not designed for it yet.

I think what we're going to see in the next couple of years is more and more systems that are built for something specific: oh, you're doing these kinds of statistical analyses in a spreadsheet?

We've got an AI system that can actually make it much easier for you to get going. You're writing things, you're documenting various things?

We're going to give you an AI system that can actually help with your particular job and what you need to get done.

And at the end of the day, you don't have to figure out the magic way to prompt a very general system so that it gives you a C-level answer.

You get something that people have actually tested and made sure works for your task. People are finding ways to make these systems really useful for them individually, but I think it's going to become a lot easier soon, as more focused systems are actually rolled out.

The only way you can use these systems effectively, to actually serve your needs, is to be an active participant in creating the computer systems around you, in a sense.

And so one of the ways that I think about that is the notion of programming.

So when I was just getting started in the field, in the '80s or so, maybe the '70s, everybody who was participating in computing could actually tell the computer what to do, because computers didn't really know much on their own.

Over time, companies grew up to make that easier, to serve as an intermediary between you and your needs on one side, and the machine and its capabilities on the other.

And I think that's great because systems are much easier to use now.

They're much more attractive. They're much more human in many ways. It's great.

On the other hand, there's now a company between you and your computer, and companies are encouraged to make the computer as useful as necessary, but also to further their own ends, their own profit motive, as far as possible.

And so to the extent that they're in the middle, we have less and less control.

And I think as a result, our relationship with these machines has actually become a little bit dysfunctional.

People getting stuck online, just clicking all day when they really should be doing something productive or healthy.

I think that's partly due to the fact that they're working through software systems that were built by companies to keep their attention.

And the companies are really good at that.

Some of the best minds of our generation are trying to figure out how to get you to click more effectively on things online.

And so from my perspective, it's really on us as individuals to have some say over the behavior of our systems.

I don't want it to recommend these kinds of news articles to me. I want to be able to say that; I want to be able to program that, in a sense.

And so from that perspective, I think if we can get people more on board with that, then first of all, it'll be important for the companies to support us, because that's what everyone's going to demand.

And second of all, it really puts us in a much more empowered position for deciding what information we're getting access to.

How we're digesting it, how we're consuming it, how we're making use of it.

I think that would be a much healthier situation than where we are now.

We're becoming more and more dependent as a society on computing and even AI specifically.

And so the more that we as a society understand this technology, the more effective, first of all, we can be in our lives.

We can actually take advantage of it to solve the problems that we individually have.

But second of all, we can help shape the discussion about what is okay and not okay to be doing with these technologies.

And this is something that I've been very excited to be seeing in the last six months or so, where people are getting on board, they're getting to understand what these systems can do and what they're good for and what they're not good for.

And they're using that knowledge to shape the debate: all the different ways in which we need to set policy about what's okay and what's not okay are being shaped by the conversations that people are having.

Now, chatbots have existed since the very beginning of the field, and traditionally the way they've been written is through human coding.

So a person would sit down and write a bunch of rules that say: if somebody asks me a question like this, I should respond like that. Or, if it's being used as, say, a customer service tool: here are all the sorts of questions that people ask, and here's how I'm going to answer them.
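To make the rule-based approach Littman describes concrete, here is a minimal sketch; the rules, responses, and function names are purely illustrative, not any real system:

```python
# Classic rule-based chatbot: hand-written keyword -> response rules.
RULES = [
    ("hours", "We're open 9am-5pm, Monday through Friday."),
    ("refund", "You can request a refund within 30 days of purchase."),
    ("password", "Use the 'Forgot password' link on the login page."),
]

FALLBACK = "Sorry, I don't understand. Could you rephrase?"

def reply(question: str) -> str:
    """Return the response for the first rule whose keyword appears."""
    q = question.lower()
    for keyword, response in RULES:
        if keyword in q:
            return response
    return FALLBACK
```

The direct mapping from input pattern to canned output is what makes such systems predictable but brittle: anything outside the rule set falls through to the fallback.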

However, if you simply train a large machine learning model on lots and lots and lots of text just to predict what the next word would be in a naturally occurring piece of text, what you discover is that you can kind of have conversations with the system that comes out, because it's making its predictions based on the context, the words that came before.

You can set it up with a question and then it can actually fill in the answer. And people said, this is amazing. We can turn this into a chat bot.
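The next-word-prediction idea can be illustrated with a toy bigram counter; real chatbots use large neural networks trained on vast corpora, and the tiny corpus and function below are hypothetical stand-ins:

```python
from collections import defaultdict, Counter

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then generate text by repeatedly taking the most likely word.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(word: str, length: int = 5) -> list[str]:
    """Greedily append the most frequent next word at each step."""
    out = [word]
    for _ in range(length):
        candidates = follows[out[-1]]
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return out
```

Nothing in this model "knows" facts; it only knows which words tend to follow which, which is exactly the fluency-without-grounding trade-off described below.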

Now you get an incredibly fluid, lucid kind of chatbot, which is remarkable.

We did not know how to build systems like that before.

Unfortunately, it has some complementary weaknesses with respect to the old-style chatbots, in that we don't really know what it's going to say next, right?

It really is determined by the statistics of the words that the systems were trained on. And they're trained to produce an answer regardless of what the context is.

So if they get themselves into a situation where the next word isn't really obvious, they'll generate something, and that something could lead to an answer that is not what we would think of as factually grounded. But the system was never built to be factually grounded. It was built to be fluent; it was built to very accurately predict what kinds of words will follow a given context. You are taking a huge risk if you decide to have the system write a paper for you, or a legal brief, or a grant proposal to the National Science Foundation. You can't trust these systems to do what a top-level human being would do.

The fact that it can answer anything accurately at all is kind of remarkable. It's really going to get people inspired.




