Google’s new video generation AI model, Lumiere, uses a diffusion architecture called Space-Time U-Net (STUNet), which figures out both where things are in a video (space) and how they move and change (time). Ars Technica reports that this method lets Lumiere create the video in one process instead of stitching smaller still frames together.

Lumiere starts by creating a base frame from the prompt. Then it uses the STUNet framework to approximate where objects within that frame will move, generating additional frames that flow into one another to create the appearance of seamless motion. Lumiere also generates 80 frames, compared to 25 frames from Stable Video Diffusion.
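The single-pass idea can be illustrated with a toy sketch (this is not Lumiere's actual code, and the "motion model" here is a trivial stand-in): rather than generating stills independently and stitching them, treat the whole clip as one space-time volume and fill it in from the base frame.

```python
import numpy as np

FRAMES, H, W = 80, 64, 64  # Lumiere reportedly outputs 80 frames

def base_frame():
    """Toy stand-in for the prompt-conditioned base frame: a bright square."""
    frame = np.zeros((H, W), dtype=np.float32)
    frame[28:36, 4:12] = 1.0
    return frame

def generate_clip(frame0):
    """Fill an entire space-time volume derived from one base frame.
    The 'motion' here is a trivial horizontal drift; the real STUNet
    learns motion jointly in space and time."""
    clip = np.zeros((FRAMES, H, W), dtype=np.float32)
    for t in range(FRAMES):
        clip[t] = np.roll(frame0, shift=t, axis=1)  # drift right 1 px/frame
    return clip

clip = generate_clip(base_frame())
print(clip.shape)  # (80, 64, 64)
```

The point of the sketch is only the data layout: one (time, height, width) volume produced together, instead of 80 separately generated images glued end to end.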

Beyond text-to-video generation, Lumiere will also allow for image-to-video generation; stylized generation, which lets users make videos in a specific style; cinemagraphs, which animate only a portion of a video; and inpainting, which masks out an area of the video to change its color or pattern.

Google’s Lumiere paper, though, noted that “there is a risk of misuse for creating fake or harmful content with our technology, and we believe that it is crucial to develop and apply tools for detecting biases and malicious use cases to ensure a safe and fair use.” The paper’s authors didn’t explain how this can be achieved.

Synopsis excerpted from The Verge article.

  • AtmaJnana@lemmy.world · 21 points · 10 months ago (edited)

    Having used diffusion a bit for static images, I can only look forward to the eldritch horrors it will inevitably create.

    • FaceDeer@kbin.social · 9 points · 10 months ago

      It’s still driving the state of the art forward, which will result in models that will be used by the public.

      • peopleproblems@lemmy.world · 3 points · 10 months ago

        Right? Once the model and training methods are published in some journal, the only barrier becomes the hardware to use it.

        Which, going by Stable Diffusion and the like, is really a matter of VRAM. Have enough of that, and this should be possible.
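        As a rough back-of-envelope sketch of why VRAM is the bottleneck (all numbers below are my own illustrative assumptions, not published Lumiere specs): the memory for a clip scales with frames × resolution × channels, so an 80-frame volume is far heavier than a single still.

        ```python
        # Rough VRAM estimate for holding one latent space-time volume in fp16.
        # Every number here is an assumption for illustration only.
        frames = 80          # reported output length
        h = w = 128          # assumed latent resolution
        channels = 4         # typical latent channel count (e.g. Stable Diffusion)
        bytes_fp16 = 2

        latent_bytes = frames * h * w * channels * bytes_fp16
        print(f"latent volume alone: {latent_bytes / 2**20:.1f} MiB")  # 10.0 MiB
        # Activations, model weights, and attention buffers multiply this
        # many times over, which is why consumer VRAM runs out fast.
        ```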

        • FaceDeer@kbin.social · 2 points · 10 months ago

          Indeed. Often the hardest part of an invention is the discovery that a thing is actually possible. Even if nobody knows how it was done, they can now justify throwing resources into figuring it out and know what results to keep an eye out for.

    • WHYAREWEALLCAPS@kbin.social · 5 points · 10 months ago

      It’s almost like, for most of history, cutting-edge tech tended to be unusable by the public until it matured enough to get businesses interested. Then they’d invest in a usability layer that was unimportant to the cutting-edge research.