A study shows that artificial intelligence can literally read your thoughts, and with exceptional accuracy

Science fiction becomes dystopian reality.

Scientists from Osaka University (Japan) have found a way to use artificial intelligence (AI) to read our minds – literally.

By combining magnetic resonance imaging (MRI) and AI technology, the researchers were able to generate images directly from human brain activity. By building on the widely used Stable Diffusion platform, they were able to do this at very low cost and with relatively little effort compared to other recent attempts. The researchers’ findings appear in a pre-print study.

Although researchers had already found ways to convert neural imaging into visual representations, this typically required complex and expensive deep-learning models that had to be trained and carefully calibrated. Even then, the representations produced were often only sketchy and not true to detail.

The Japanese researchers’ new method takes advantage of a common technology that is widely used to generate images from text prompts. For the past several months, social media platforms have been inundated with images created using Stable Diffusion and other similar platforms. The technology is capable of producing compelling, sometimes hyper-realistic, images with just a few carefully chosen words. It can be used to create static images or, with some modifications, animations in popular styles such as anime.

While some in the art world support this, many artists fear that this technology will replace them – and soon. Some have begun campaigning for this technology to be restricted or even banned.

Last September, The New York Times reported on the aftermath of this year’s Colorado State Fair art competition, which was won by an entry – Théâtre D’opéra Spatial – made with Midjourney, another popular AI system.

“Art is dead, dude,” winner Jason M. Allen told the Times.


To generate their images, the Japanese researchers followed a two-step process. First, they decoded an image from the MRI signals of their test subjects. They then used the MRI signals to decode “latent text representations,” which could be fed like prompts into the Stable Diffusion platform to improve the quality of the images originally retrieved.
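The two-step process can be sketched in miniature. Everything below is a hypothetical stand-in, not the researchers’ actual model: the decoder matrices, voxel count, and function names are illustrative assumptions. Only the latent dimensions (a 4×64×64 image latent and 77×768 text-conditioning embeddings) reflect the real Stable Diffusion platform.

```python
import numpy as np

# Minimal sketch of the two-step decoding pipeline. Real decoders would be
# regression models trained on paired fMRI/image data; here they are just
# random matrices standing in for the learned mappings.
N_VOXELS = 100  # kept tiny here; a real fMRI scan has far more voxels

rng = np.random.default_rng(0)

# Assumed linear decoders: one maps brain activity into Stable Diffusion's
# image-latent space (4 x 64 x 64), the other into its text-conditioning
# space (77 tokens x 768 dimensions).
W_image = rng.normal(size=(N_VOXELS, 4 * 64 * 64))
W_text = rng.normal(size=(N_VOXELS, 77 * 768))

def decode_image_latent(fmri):
    """Step 1: recover a coarse latent image z from the MRI signal."""
    return (fmri @ W_image).reshape(4, 64, 64)

def decode_text_embedding(fmri):
    """Step 2: recover a 'latent text representation' c, used like a
    prompt to condition the diffusion model."""
    return (fmri @ W_text).reshape(77, 768)

fmri = rng.normal(size=N_VOXELS)   # one scan, flattened voxel activity
z = decode_image_latent(fmri)      # coarse reconstruction
c = decode_text_embedding(fmri)    # semantic guidance
# In the full method, z and c are handed to Stable Diffusion's denoising
# loop, which refines z under the guidance of c into the final image.
print(z.shape, c.shape)
```

The key design point is that the heavy lifting – turning a rough latent and a semantic embedding into a sharp picture – is delegated to an off-the-shelf diffusion model, which is why the approach is so much cheaper than training a bespoke image-reconstruction network.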

The results of this process can be seen in the series of images below.

In a paper published in Nature last year, an almost identical approach, albeit with a more specialized diffusion technology, was used to reconstruct AI-generated faces from MRI data.

Although the reconstructions created in the Nature study are clearly closer to the original depictions, what is remarkable about the new Japanese research is that it was conducted using a platform already used by millions of people around the world to create images.

Stable Diffusion is a learning system – a so-called neural network – that trains itself on huge amounts of image data taken directly from the vast image archive on the Internet.

Its creator, Emad Mostaque, has called it a “generative search engine”. Unlike Google, which shows you images that already exist, Stable Diffusion can show you things that don’t exist.

“This is a Star Trek holodeck moment,” says Mostaque.


The disruptive power of AI has been shown very clearly in the past year. As a magazine editor, I can see the amazing potential of platforms like Stable Diffusion, which deliver high-quality art with almost no cost or hassle. If I want an image of President Biden’s iconic showdown with Corn Pop, perhaps in the style of a 1980s action movie, all I have to do is ask and an AI platform like Midjourney will provide it. What’s not to like about it?

At the same time, it is clear that such technology has very real disadvantages. As already mentioned, artists are afraid of losing commissions to AI.

Aside from AI-generated art, we’ve also been hearing a lot about ChatGPT lately, a technology that can write complicated essays from scratch in seconds, again from a simple prompt. This technology is already being used by university students to write high-quality essays that they can use to fool their examiners. The entire assessment system at universities may need to be reconsidered.

You don’t have to be a fan of the Terminator or Matrix movies, though, to realize that the troubling uses of AI extend far beyond cheating college exams or putting mediocre artists out of business.

The website Popular Mechanics reported a few days ago that US Air Force drones can now use AI to recognize the faces of individuals from the air. The company that developed the technology, Seattle-based RealNetworks, says the technology allows the drones to distinguish friend from foe and the software can be used for rescue operations, patrols and “domestic searches.” An Israeli company is working on similar technology designed to help drones find the perfect angle for facial recognition.

How long will it be before AI-equipped drones make decisions that harm people or even cause death? Will AI drones be allowed to make life and death decisions without human intervention? Will they be used in domestic policing and surveillance – in what roles and with what powers? These are classic questions asked by futurists, science fiction writers, and ethicists, but it’s becoming increasingly clear that these are no longer hypotheticals or thought experiments: it is a question of when, not if, and probably much sooner than you would like to think.

In Dubai, drones are already being used by the police to identify bad drivers. In the United States, police use drones for a variety of purposes, from assisting with search and rescue operations and training officers, to gathering information on suspects and monitoring public events. Local authorities and civil rights groups are already opposing the use of facial recognition technology. In 2021, Portland City Council passed one of the toughest bans on this technology in the US, perhaps in part because the city was an epicenter of rioting during the summer of “mostly peaceful” riots and beyond. Also in 2021, the New York Civil Rights Association released a report warning of the widespread use of drones for surveillance in the city and state of New York.

The malicious capabilities of AI mind-reading technology far surpass those of drones with facial recognition software. We’re not talking about potential intrusions into our social privacy here – where we go and what we do – but intrusions into our minds – a mad scientist’s dream straight out of MKUltra. At this point, it’s hard to imagine what applications the technology could really have, other than replacing the blunt techniques of sensory deprivation, isolation, and scopolamine as a means of tricking people into revealing their innermost secrets.

As clumsy as the technology may seem now, the compelling leaps we’ve seen over the past year with technologies like GPT-3 and Stable Diffusion should leave us in no doubt as to the direction it’s headed. Technology will continue to become more powerful, more accurate and more practical, and that’s a guarantee.

We’ll soon find that our wildest dreams, but also our worst nightmares, could be more real than we ever dared to think. What’s even more frightening is that others might be able to see them too, whether we like it or not.
