AI news recap: While Hollywood strikes, is ChatGPT getting worse?

Hollywood actors strike over use of AI in films and other issues

Artificial intelligence can now create images, novels and source code from scratch. Except it isn’t really from scratch, because vast numbers of human-generated examples are needed to train these AI models – something that has angered artists, programmers and writers and led to a series of lawsuits.

Hollywood actors are the latest group of creatives to turn against AI. They fear that film studios could take control of their likeness and have them “star” in films without ever being on set, perhaps taking on roles they would rather avoid and uttering lines or acting out scenes they would find distasteful. Worse still, they might not get paid for it.

That is why the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA), which has 160,000 members, is on strike until it can negotiate AI rights with the studios.

At the same time, Netflix has come under fire from actors over a job listing seeking people with experience in AI, offering a salary of up to $900,000.

AIs trained on AI-generated images produce glitches and blurs

Speaking of training data, we wrote last year that the proliferation of AI-generated images could be a problem if they ended up online in great numbers, as new AI models would hoover them up to train on. Experts warned that the end result would be worsening quality. At the risk of making an outdated reference, AI would slowly destroy itself, like a degraded photocopy of a photocopy of a photocopy.

Well, fast-forward a year and that seems to be precisely what is happening, leading another group of researchers to issue the same warning. A team at Rice University in Texas found evidence that AI-generated images making their way into training data in large numbers slowly distorted the output. But there is hope: the researchers found that if the proportion of those images was kept below a certain level, this degradation could be staved off.
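
For illustration, here is a minimal Python sketch of that mitigation – not the Rice team's code, and the names, sample counts and 10 per cent cap are all hypothetical. The idea is simply to cap the share of AI-generated samples when assembling a training set.

```python
# Hypothetical sketch: mix real and AI-generated samples while keeping the
# synthetic share of the final training set at or below a chosen fraction.
import random

def build_training_set(real, synthetic, max_synthetic_fraction=0.1):
    # Largest synthetic count that keeps synthetic / (real + synthetic)
    # at roughly the cap.
    allowed = round(max_synthetic_fraction * len(real) / (1 - max_synthetic_fraction))
    mix = real + random.sample(synthetic, min(allowed, len(synthetic)))
    random.shuffle(mix)
    return mix

real_images = [f"real_{i}" for i in range(900)]
ai_images = [f"synthetic_{i}" for i in range(500)]
dataset = build_training_set(real_images, ai_images)
print(len(dataset), sum(name.startswith("synthetic") for name in dataset))  # 1000 100
```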

Is ChatGPT getting worse at maths problems?

Corrupted training data is just one way that AI can start to fall apart. One study this month claimed that ChatGPT was getting worse at mathematics problems. When asked to check whether each of 500 numbers was prime, the version of GPT-4 released in March scored 98 per cent accuracy, but the version released in June scored just 2.4 per cent. Strangely, by comparison, GPT-3.5’s accuracy seemed to jump from just 7.4 per cent in March to almost 87 per cent in June.
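
To make the kind of benchmark behind these numbers concrete, here is a hedged Python sketch of how such an evaluation can be scored. It is not the study's actual code: `ask_model` is a placeholder stub standing in for a real query to the model under test, and the numbers tested are purely illustrative.

```python
# Illustrative sketch: score a model's yes/no answers on primality questions
# against a deterministic ground-truth check.

def is_prime(n: int) -> bool:
    """Ground truth by trial division up to sqrt(n)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def ask_model(n: int) -> bool:
    """Placeholder for querying the model under test (e.g. via an API call).
    This stub naively answers 'prime' for every odd number."""
    return n % 2 == 1

numbers = range(1000, 1500)  # 500 test numbers, chosen only for illustration
correct = sum(ask_model(n) == is_prime(n) for n in numbers)
print(f"accuracy: {correct / len(numbers):.1%}")
```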

Arvind Narayanan at Princeton University, who found other shifts in performance in a separate study, puts the problem down to “an unintended side effect of fine-tuning”. Basically, the creators of these models are tweaking them to make the outputs more reliable, accurate or – potentially – less computationally intensive in order to cut costs. And although this may improve some things, other tasks might suffer. The upshot is that, while AI might do something well now, a future version might perform significantly worse, and it may not be obvious why.

Using bigger AI training data sets may produce more racist results

It is an open secret that a lot of the advances in AI in recent years have come simply from scale: larger models, more training data and more computing power. This has made AIs expensive, unwieldy and hungry for resources, but it has also made them far more capable.

Certainly, there is a lot of research going on to shrink AI models and make them more efficient, as well as work on more elegant ways to advance the field. But scale has been a big part of the game.

Now though, there is evidence that this could have serious downsides, including making models even more racist. Researchers ran experiments on two open-source data sets: one contained 400 million samples and the other had 2 billion. They found that models trained on the larger data set were more than twice as likely to associate Black female faces with a “criminal” category and five times more likely to associate Black male faces with being “criminal”.
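
As a rough illustration of the kind of measurement involved – not the researchers' actual method or data – the sketch below tallies how often a classifier's predictions assign the “criminal” label to faces from each demographic group. The predictions listed are made up.

```python
# Made-up example: compute the per-group rate at which a classifier assigns
# the "criminal" category, so the groups can be compared against one another.
from collections import Counter

# Hypothetical (group, predicted_label) pairs from a trained model.
predictions = [
    ("black_female", "criminal"), ("black_female", "neutral"),
    ("black_male", "criminal"), ("black_male", "criminal"),
    ("white_female", "neutral"), ("white_male", "neutral"),
]

totals = Counter(group for group, _ in predictions)
hits = Counter(group for group, label in predictions if label == "criminal")

for group in sorted(totals):
    print(f"{group}: {hits[group] / totals[group]:.0%} labelled 'criminal'")
```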

Drones with AI targeting system claimed to be ‘better than human’

Earlier this year we covered the strange tale of the AI-powered drone that “killed” its operator to get to its intended target – a story that turned out to be complete nonsense. The US Air Force quickly denied it, but that did little to stop it being reported around the world.

Now, we have fresh claims that AI models can do a better job of identifying targets than humans – although the details are deemed too secret to reveal, and are therefore impossible to verify.

“It can check whether people are wearing a particular type of uniform, if they are carrying weapons and whether they are giving signs of surrendering,” says a spokesperson for the company behind the software. Let’s hope they are right, and that AI does a better job of waging war than it does of identifying prime numbers.

If you enjoyed this AI news recap, try our special series where we explore the most pressing questions about artificial intelligence. Find them all here:

How does ChatGPT work? | What generative AI really means for the economy | The real risks posed by AI | How to use AI to make your life simpler | The scientific challenges AI is helping to crack | Can AI ever become conscious?
