Researchers have created Mind-Video, a system that can reconstruct videos of what we've seen from brain activity. Here are some results:
They used fMRI brain scans to learn how our brains process what we see. Before this, researchers could only reconstruct still images, not videos.
They faced some challenges, though. The brain's response to what it sees is delayed (the blood-flow signal that fMRI measures lags the stimulus by several seconds), so it was hard to match brain signals to what was happening in real time. Earlier approaches also didn't give the generator enough guidance, so the reconstructions weren't very accurate. They had to find a way to make videos that were both consistent over time and realistic.
They came up with a two-step plan. First, they trained a model to understand brain signals and what they mean, using a new learning method and attention over different regions of the brain. Then, in the second step, they fine-tuned a video-generation model on the brain data so it could turn those learned brain representations into video.
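The first step is, roughly speaking, a contrastive alignment problem: embeddings of brain signals for a given moment should sit close to embeddings of the matching visual stimulus and far from mismatched ones. Here is a minimal sketch of that idea using an InfoNCE-style loss; all names, shapes, and the use of NumPy are illustrative assumptions, not the authors' actual code.

```python
import numpy as np

def info_nce_loss(brain_emb, image_emb, temperature=0.1):
    """Contrastive (InfoNCE-style) loss: matched brain/stimulus pairs are
    pulled together, mismatched pairs pushed apart. Both inputs have
    shape (batch, dim); pair i in brain_emb matches row i in image_emb.
    (Illustrative sketch, not the Mind-Video implementation.)"""
    # L2-normalize so the dot product is cosine similarity
    b = brain_emb / np.linalg.norm(brain_emb, axis=1, keepdims=True)
    v = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    logits = b @ v.T / temperature  # (batch, batch) similarity matrix
    # Log-softmax over each row; the correct pairing is the diagonal
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
emb = rng.standard_normal((4, 8))
# Perfectly aligned embeddings should score a lower loss than random pairings
aligned = info_nce_loss(emb, emb)
random_pairs = info_nce_loss(emb, rng.standard_normal((4, 8)))
print(f"aligned loss = {aligned:.3f}, random loss = {random_pairs:.3f}")
```

In a real training loop the brain embeddings would come from a trainable fMRI encoder and the stimulus embeddings from a pretrained image/text encoder, with this loss driving the encoder's gradient updates.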
The clips on the left below are what was actually seen, and the videos on the right were generated by the technology from brain activity alone.
The researchers also found interesting things about the brain itself. The visual cortex is known to be important for seeing things and understanding how they change over time. They also observed that two brain networks (the dorsal attention network and the default mode network) work together with the visual cortex to help us make sense of what we see.
I wonder if sound and emotions related to what we've seen can be reconstructed?