Adobe’s Oriol Nieto loaded up a short video with a handful of scenes and a voice-over, but no sound effects. The AI model analyzed the video and broke it down into scenes, applying emotional tags and a description to each scene. Then came the sound effects. The AI model picked up on a scene with an alarm clock, for instance, and automatically created a sound effect. It identified a scene where the main character (an octopus, in this case) was driving a car, and it added a sound effect of a door…
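Adobe hasn’t published the internals of the demo, but the flow described above — segment the video into scenes, tag each with an emotion and a description, then synthesize a matching effect per scene — maps onto a straightforward pipeline. The sketch below is a minimal illustration of that flow, not Adobe’s actual system; the `Scene` data model and the `segment_scenes` and `generate_sound_effect` stubs are hypothetical stand-ins for whatever models the demo uses.

```python
from dataclasses import dataclass

# Hypothetical data model for one analyzed scene; the field names are
# illustrative, not Adobe's actual schema.
@dataclass
class Scene:
    start_s: float      # scene start time in seconds
    end_s: float        # scene end time in seconds
    description: str    # e.g. "an octopus driving a car"
    emotion: str        # emotional tag applied by the analysis model

def segment_scenes(video_path: str) -> list[Scene]:
    """Stand-in for the video-analysis step: in the demo, a model breaks
    the clip into scenes and tags each one. Here we return canned scenes
    matching the article's examples."""
    return [
        Scene(0.0, 3.5, "an alarm clock going off", "urgent"),
        Scene(3.5, 9.0, "an octopus driving a car", "playful"),
    ]

def generate_sound_effect(scene: Scene) -> str:
    """Stand-in for the generative-audio step: a real system would
    synthesize audio conditioned on the scene's description and emotion.
    Here we just name the effect that would be rendered."""
    return (f"sfx for '{scene.description}' "
            f"({scene.emotion}, {scene.end_s - scene.start_s:.1f}s)")

if __name__ == "__main__":
    # Run the per-scene loop the article describes: analyze, then score.
    for scene in segment_scenes("short_video.mp4"):
        print(generate_sound_effect(scene))
```

The key design point the demo illustrates is the per-scene loop: because each scene carries its own description and emotional tag, the audio generator can be conditioned independently for each segment rather than scoring the whole clip at once.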








