Intersection of AI in Large Language Models

How AI language models interpret meaning: Updates on AI & Technology by Outreinfo

There’s a lot of chatter about AI right now. The world is not used to consuming language that wasn’t produced by a human, and suddenly we are all talking to something that feels very much like there’s a human on the other side. These large language models can give you goosebumps. Updates on AI & Technology by Outreinfo often delve into this topic, exploring how these models work and what their implications are.

This question of whether language models can actually have meaning, learn meaning, encode meaning is probably one of the most fundamental and most debated questions right now. There’s this crazy opportunity to learn a ton about AI and about humans. What does it mean to be intelligent? What does it mean to think and to reason? How do you know that a system is behaving intelligently?

We’ve been taking these generative AI systems and trying to understand how they really work under the hood. They slurp up an enormous amount of data, like the entire internet. But all they’re actually doing, at a very basic level, is predicting the next word in a sentence. They’re just learning a very good probability distribution over what those next words look like.
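
To make that concrete, here is a minimal sketch of next-word prediction, assuming the open-source transformers and torch packages are installed. GPT-2 and the prompt string are just illustrative choices, not a claim about any particular production system: the model assigns a probability to every possible next token, and we print the few most likely ones.

```python
# Minimal sketch of next-token prediction. GPT-2 is a stand-in; any causal
# language model from the transformers library works the same way.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "I walked out to the orchard and picked a ripe"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The distribution over the next word comes from the logits at the last position.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item()):>12}  p = {prob.item():.3f}")
```

Everything the model “knows” has to show up through distributions like this one, which is exactly why the question of whether that amounts to meaning is so contested.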

What is going on when you ask a language model a question and it gives you an answer? There’s some process that connected that input to the output, and we’re trying to figure out what that process is. And then we’ve been doing a lot of work with cognitive science and neuroscience to ask similar questions about humans, and then ask: how similar are these two processes?

The relationship between AI and neuroscience has always been there, because both fields are about intelligence and about building computational models of intelligence. The similarity has become even stronger in recent years, because the artificial intelligence models we have right now are, to a large extent, things we stumbled upon. They are not carefully engineered systems the way AI systems of the 1980s and ‘90s were.

We’re just like, “Oops! It’s now speaking language as well as humans.” And so now we’re very much in the position that neuroscientists are in, which is: “I have this system, it’s a complicated system, it has a behaviour, and I need to reverse engineer it. I need to come up with a theory that explains the behaviour and predicts the behaviour.”

There’s this kind of feeling that because they’ve only had access to text, and not to the actual world, large language models can’t really encode meaning, because meaning is about what’s in the actual world. These large language models are neural networks with billions and billions of individual numbers, each playing some role, and we have to figure out how to sift through those numbers and pull out the pieces that matter. When I use a word like “apple,” there’s this huge web of stuff I know about apples… a crisp fall afternoon and walking out in the orchards, making pies, going apple picking with my kids…
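
As a toy illustration of what sifting through those numbers can look like, here is a sketch that pulls hidden-state vectors out of GPT-2 (again only a stand-in model) and compares sentences with cosine similarity. The example sentences and the use of the final token’s vector as a crude sentence representation are assumptions made for illustration, not a description of how interpretability research is actually done.

```python
# Toy probe of the numbers inside the network: read out the hidden-state vector
# the model produces at the end of each sentence and compare those vectors.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")
model.eval()

def last_token_vector(sentence: str) -> torch.Tensor:
    """Return the final hidden state of the last token as a crude sentence vector."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden_size)
    return hidden[0, -1]

a = last_token_vector("We spent the afternoon picking apples in the orchard")
b = last_token_vector("She baked an apple pie on a crisp fall afternoon")
c = last_token_vector("The quarterly earnings report disappointed investors")

cos = torch.nn.functional.cosine_similarity
print("orchard vs. pie:     ", cos(a, b, dim=0).item())
print("orchard vs. earnings:", cos(a, c, dim=0).item())
```

A probe this crude proves nothing on its own; the point is only that any claim about what the model “knows” about apples has to be recovered from vectors like these.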

There’s obviously a huge part of the meaning of a word that comes from sensory experience. But then there’s a huge part that doesn’t. How much could a language model, with really good access to a lot of data generated by humans who did have access to the real world, proxy meaning, or even get the real thing, just from that text?

How do we actually look for these concepts, this meaning, within large language models? These are really, really big questions, and we don’t have the answers to them, and we should be open-minded in trying to study them. This is kind of a long game. It could take many years, because we’re still in the process of trying to invent the right tools to do this.

I don’t think these systems are human level, but I think there are things happening inside them that are non-trivial. Within my lab, within a single project, we will have results that simultaneously suggest that the model is actually quite clever and represents things in a way that we would call, you know, systematic and compositional and human-like in a certain way. And at the same time, it is doing something wild that just seems plainly dumb. Those things can both be true. And I think that’s the kind of world we’re in right now that makes the science so messy and so interesting.
