Sunday, February 5, 2023

AI is improving faster than most people realize

Technologies like ChatGPT will revolutionize the economy, but they are improving so fast it’s impossible to say how.

Artificial intelligence is developing in a way that is difficult for the human mind to comprehend. For a long time nothing happens, and then suddenly something happens. The current revolution in large language models (LLMs) such as ChatGPT is the result of the advent of "transformer" neural networks around 2017.

What will the next half decade bring? Can we rely on our current impressions of these tools to judge their quality, or will they surprise us with their development? As someone who has spent many hours playing with these models, I think many people will be shocked. LLMs will have significant implications for our business decisions, our portfolios, our regulatory structures and the simple question of how much we as individuals should invest in learning how to use them.

To be clear, I’m not an AI sensationalist. I don’t think it will lead to mass unemployment, let alone the “Skynet goes live” scenario and the ensuing destruction of the world. I do think it will prove to be a lasting competitive and learning advantage for the people and institutions that can use it.

I have a story for you, about chess and a neural network project called AlphaZero at DeepMind. AlphaZero was launched at the end of 2017. Almost immediately, it began training by playing hundreds of millions of chess games against itself. After about four hours, it was the strongest chess player the world had ever seen. The lesson of this story: given the right conditions, AI can improve very, very quickly.

LLMs cannot keep up with that pace because they deal with more open and complex systems, and they also require continued business investment. Still, the recent progress is impressive.

I was not impressed with GPT-2, a 2019 LLM. I was intrigued by GPT-3 (2020) and am very impressed with ChatGPT, which is sometimes referred to as GPT-3.5 and was released late last year. GPT-4 is on its way, possibly in the first half of this year. In just a few short years, these models have gone from curiosities to an integral part of the work routines of many people I know. This semester I will be teaching my students how to write a paper using LLMs.

ChatGPT, the model released late last year, received a grade of D on an undergraduate labor economics exam given by my colleague Bryan Caplan. Claude, a new LLM from Anthropic that is available in beta and expected to be released this year, passed our graduate-level law and economics exam with nice, clear answers. (If you’re wondering, blind grading was used.) Granted, current results from LLMs aren’t always impressive. But keep these examples – and AlphaZero’s – in mind.

I don’t have a forecast for the rate of improvement, but most analogies to normal economics don’t apply. Cars get better every year by a modest amount, as do most other things I buy or use. LLMs, on the other hand, can take leaps and bounds.

Still, you may be wondering, “What can LLMs do for me?” I have two direct responses.

First, they can write software code. They make a lot of mistakes, but it’s often easier to edit and correct those mistakes than to write the code from scratch. They are also usually most useful when writing the tedious parts of code, freeing up talented human programmers for experimentation and innovation.
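To make that concrete, here is a sketch of my own (not from the column) of the kind of tedious glue code an LLM can draft in seconds, which a human programmer then reviews and corrects rather than writing from scratch. The function name and format are hypothetical examples, not anything the column specifies.

```python
def parse_config(text):
    """Parse simple 'key = value' lines into a dict, skipping blanks and comments.

    Boilerplate like this is exactly what an LLM drafts well; the human
    reviewer's job is to catch the edge cases, such as '=' inside values.
    """
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # ignore blank lines and comments
        # partition splits on the FIRST '=' only, so values may contain '='
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    return config


sample = """
# database settings
host = localhost
url = postgres://user@host?opts=1
"""
print(parse_config(sample))
```

The point is not that the draft is perfect; it is that editing a mostly-correct draft is usually faster than a blank page, which frees the programmer for the genuinely hard parts.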

Second, they can be teachers. Such LLMs already exist, and they will get much better soon. They can provide very interesting answers to questions about almost anything in the human or natural world. They are not entirely reliable, so they are best used for new ideas and inspiration, not fact-checking. I expect they will be integrated with fact-checking and search services soon enough. In the meantime, they can improve writing and organize notes.

I’ve started dividing the people I know into three camps: those who are unaware of LLMs; those who complain about the current LLMs; and those who have some inkling of the startling future ahead. The intriguing thing about LLMs is that they don’t follow smooth, continuous rules of development. Rather, they are like a caterpillar turning into a butterfly.

It is only human, if I may use that word, to be concerned about this future. But we also have to be ready.

Tyler Cowen is a Bloomberg Opinion columnist. He is a professor of economics at George Mason University and writes for the blog Marginal Revolution. He is co-author of “Talent: How to Identify Energizers, Creatives and Winners Around the World.”
