Takeaways from Stanford’s 386-page report on the state of AI

Writing a report on the state of AI must be a lot like building on quicksand: the moment you hit publish, the whole industry has shifted under your feet. But Stanford's 386-page offering still manages to surface important trends and takeaways from this complex, fast-moving field.

The AI Index, from the Institute for Human-Centered Artificial Intelligence, worked with experts from academia and private industry to gather insights and predictions on the matter. As an annual effort (and judging by its size, you can bet they're already hard at work on the next one), it may not capture the very latest developments, but these broad periodic surveys are important for keeping a finger on the pulse of the industry.

This year's report includes "new analysis on foundation models, including their geopolitics and training costs, the environmental impact of AI systems, K-12 AI education and trends in public opinion on AI," as well as an overview of policy in a hundred new countries.

Here are the high-level takeaways in brief:

  • AI development has shifted over the past decade from academia-led to industry-led, by a wide margin, and that shows no signs of changing.
  • It's becoming difficult to test models on traditional benchmarks, and a new paradigm may be needed here.
  • The energy footprint of training and using AI is becoming considerable, but we have yet to see how AI might add efficiencies elsewhere.
  • The number of "AI-related incidents and controversies" has increased 26-fold since 2012, which honestly seems a bit low.
  • AI-related skills and job openings are growing, but not as fast as you might think.
  • Policymakers, however, are falling over themselves trying to draft a definitive AI bill, a fool's errand if ever there was one.
  • Investment has temporarily stalled, but that’s after an astronomical increase over the past decade.
  • More than 70% of Chinese, Saudi Arabian and Indian respondents felt that AI had more advantages than disadvantages. Americans? 35%.

But the report goes into detail on many topics and sub-topics, and it is quite readable and non-technical. Only the truly dedicated will read all of the nearly 300 pages of analysis, but really, just about any motivated reader could.

Let's look at Chapter 3, Technical AI Ethics, in a bit more detail.

Bias and toxicity are hard to reduce to metrics, but to the extent that we can define and test models for these things, it's clear that "unfiltered" models are much, much easier to steer into problematic territory. Instruction tuning, i.e. adding an extra layer of preparation (such as a hidden prompt) or passing the model's output through a second mediator model, is effective at mitigating this problem, but it's far from perfect.
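To make that mitigation setup concrete, here is a minimal sketch in Python of the two layers described above: a hidden prompt prepended to the user's request, and a second mediator pass over the raw output. The `generate` and `moderate` functions and the toy blocklist are placeholders invented for illustration, not anything from the report or a real model API.

```python
# Minimal sketch of the two mitigation layers described above: a hidden
# system prompt prepended to the user's input, plus a second "mediator"
# pass over the raw output. `generate` and `moderate` are placeholder
# stubs, not real model APIs.

HIDDEN_PROMPT = (
    "You are a helpful assistant. Refuse requests for hateful, "
    "violent, or otherwise harmful content."
)

def generate(prompt: str) -> str:
    """Stand-in for a call to an 'unfiltered' language model."""
    return f"[model output for: {prompt!r}]"

def moderate(text: str) -> bool:
    """Stand-in for a second mediator model that flags problematic output."""
    blocklist = ("slur", "violence")  # toy heuristic for illustration only
    return not any(word in text.lower() for word in blocklist)

def safe_generate(user_input: str) -> str:
    # Layer 1: hidden prompt prepended to the user's request.
    raw = generate(f"{HIDDEN_PROMPT}\n\nUser: {user_input}")
    # Layer 2: pass the output through the mediator before returning it.
    if moderate(raw):
        return raw
    return "Sorry, I can't help with that."

if __name__ == "__main__":
    print(safe_generate("Summarize the AI Index chapter on technical ethics."))
```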

The increase in “AI incidents and controversies” alluded to in the bullet points is best illustrated by this diagram:

Image credits: Stanford HAI

As you can see, the trend is up, and these numbers predate the widespread adoption of ChatGPT and other large language models, not to mention the great leaps made by image generators. You can be sure that the 26x increase is just the start.

Making models more fair or unbiased in one way can have unintended consequences on other metrics, as this diagram shows:

Image credits: Stanford HAI

As the report notes, "Language models that perform better on certain fairness criteria tend to have worse gender bias." Why? It's hard to say, but it goes to show that optimization isn't as straightforward as one might hope. There's no easy way to improve these large models, partly because we don't really understand how they work.

Fact-checking is one area that seems like a natural fit for AI: having indexed much of the web, it could assess statements and return a confidence score that they are backed by truthful sources, and so on. That is very far from being the case. AI is actually particularly bad at assessing factuality, and the risk is not so much that it's an unreliable checker, but that it could itself become a potent source of compelling misinformation. A number of studies and datasets have been created to test and improve AI fact-checking, but so far we are still more or less where we started.
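For a rough picture of the pipeline described above (retrieve sources, compare them against a claim, return a confidence), here is a toy sketch. The corpus and the word-overlap scoring are stand-ins invented for this example; a real system would use a search index and a trained entailment model, and, as the report suggests, would still struggle.

```python
# Toy sketch of a retrieve-then-verify fact-checking loop: find candidate
# sources for a claim, score how well each supports it, and return an
# overall confidence. The corpus and the word-overlap "support" score are
# placeholders, not a real fact-checking system.

CORPUS = {
    "https://example.org/report": "The 2023 AI Index report is 386 pages long.",
    "https://example.org/blog": "Industry now produces most notable AI models.",
}

def support_score(claim: str, passage: str) -> float:
    """Crude word-overlap score standing in for an entailment model."""
    claim_words = set(claim.lower().split())
    passage_words = set(passage.lower().split())
    return len(claim_words & passage_words) / max(len(claim_words), 1)

def check_claim(claim: str, top_k: int = 2) -> dict:
    # Rank candidate sources by how strongly they appear to support the claim.
    scored = sorted(
        ((support_score(claim, text), url) for url, text in CORPUS.items()),
        reverse=True,
    )[:top_k]
    confidence = scored[0][0] if scored else 0.0
    return {"claim": claim, "confidence": confidence, "sources": [u for _, u in scored]}

if __name__ == "__main__":
    print(check_claim("The AI Index report is 386 pages long."))
```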

Luckily, there's been a surge of interest here, for the obvious reason that if people feel they can't trust AI, the whole industry gets set back. Submissions to the ACM conference on fairness, accountability and transparency (FAccT) have increased enormously, and at NeurIPS, topics such as fairness, privacy and interpretability are getting more attention and stage time.

Even these highlights of the highlights leave a lot of detail on the table. The HAI team has done a great job of organizing the content, though, and after going through the high-level material here, you can download the full report and dig deeper into any topic that piques your interest.
