Help Request: SD.Next runtime error

I am trying to run SD.Next for T2I with an SD 1.5 model and keep getting the runtime error ‘Meta tensor error’. I have tried adding –backend original to the command-line args, but that did not help. The error log is below. Any help troubleshooting this would be greatly appreciated.

NotImplementedError: Cannot copy out of meta tensor; no data!
19:44:10-203095 DEBUG Search model: name="v1-5-pruned-fp16-emaonly [92954befdb]"
matched="C:\SD_Next\sdnext\models\Stable-diffusion\v1-5-pruned-fp16-emaonly.safetensors"
type=alias
19:44:10-219740 DEBUG Analyzed: model="v1-5-pruned-fp16-emaonly" type=sd class=StableDiffusionPipeline
size=2132650708 mtime="2025-12-21 14:29:25" modules=[name="vae" cls=AutoencoderKL config=True
device=meta dtype=torch.float16 params=83653863 modules=243, name="text_encoder"
cls=CLIPTextModel config=True device=meta dtype=torch.float16 params=123060480 modules=152,
name="tokenizer" cls=CLIPTokenizer config=False, name="unet" cls=UNet2DConditionModel
config=True device=meta dtype=torch.float16 params=859520964 modules=709, name="scheduler"
cls=EulerAncestralDiscreteScheduler config=True, name="safety_checker" cls=NoneType
config=False, name="feature_extractor" cls=CLIPImageProcessor config=False,
name="image_encoder" cls=NoneType config=False, name="requires_safety_checker" cls=bool
config=False]


This is an error that typically shows up when something goes wrong with offloading.


Your log shows SD.Next built a Diffusers pipeline, but all core modules are still on device=meta (VAE, UNet, text encoder). A meta tensor has no data, so when SD.Next tries to move it to CPU or GPU you get:

NotImplementedError: Cannot copy out of meta tensor; no data! (PyTorch Documentation)

That is not a “bad prompt” or “bad sampler” problem. It is a model-loading / offload / dispatch problem: the model was created as an “empty shell” and never got its weights fully materialized.


What “device=meta” means in plain terms

PyTorch’s meta device stores shapes and dtypes only. No memory. No values. You cannot do .to("cuda") or .to("cpu") from meta because there is nothing to copy. (PyTorch Documentation)
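Here is what that refusal looks like in plain PyTorch, independent of SD.Next:

    import torch

    t = torch.empty(4, 4, device="meta")   # shape and dtype exist, storage does not
    print(t.device)                         # meta
    t.to("cpu")                             # NotImplementedError: Cannot copy out of meta tensor; no data!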

This pattern is used intentionally by Hugging Face Accelerate to avoid RAM spikes:

  • accelerate.init_empty_weights() creates an empty model on meta
  • you must then load weights with load_checkpoint_and_dispatch()
  • and you cannot do model.to(device) while it is still empty/meta (Hugging Face)

So your error means: SD.Next entered an “empty init / offload” path but did not successfully complete the “load+dispatch weights” step.
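As a rough sketch of that two-step pattern (a toy nn.Sequential stands in for the real UNet/VAE, and the checkpoint path is a placeholder):

    import torch.nn as nn
    from accelerate import init_empty_weights, load_checkpoint_and_dispatch

    # Step 1: "empty init" -- parameters land on the meta device, so no RAM is used.
    with init_empty_weights():
        model = nn.Sequential(nn.Linear(1024, 1024), nn.Linear(1024, 1024))
    print(next(model.parameters()).device)  # meta

    # Step 2: weights must then be materialized and dispatched explicitly.
    # If this step fails or never runs, the model stays on meta, and any later
    # move to CPU/GPU raises "Cannot copy out of meta tensor; no data!".
    # model = load_checkpoint_and_dispatch(model, checkpoint="weights.safetensors",
    #                                      device_map="auto")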


Why your --backend original attempt “did not help”

Your log says: class=StableDiffusionPipeline.

That is the Diffusers pipeline class. In SD.Next terms, that means you are on the Diffusers backend, not the Original backend. SD.Next explicitly documents that it supports two backends and that Diffusers is the default for new installs, while Original supports SD 1.x and SD 2.x (including SD 1.5). (GitHub)

So one of these is true:

  1. The flag you passed was not actually parsed (common on Windows if you pasted a Unicode dash like –backend instead of ASCII --backend), or
  2. You started SD.Next from a shortcut/bat that does not include your args, or
  3. A config setting forced Diffusers anyway.

The key point: your running instance is still Diffusers, as evidenced by the pipeline class. (GitHub)


The most likely causes for your case (ranked)

Cause 1: Offload mode is enabled and the offload path is failing, leaving modules on meta

SD.Next has multiple offload modes (none, balanced, sequential); the defaults depend on available VRAM, and --lowvram triggers sequential offload. (GitHub)

And SD.Next has a known issue where balanced offload can throw exactly your meta-tensor error during startup. (GitHub)

Also, community guidance frequently ties this specific exception to “Accelerate auto-offloading because GPU memory is insufficient,” which aligns with the offload theme. (Stack Overflow)

What this looks like in the logs: the model analysis shows all major modules on device=meta, exactly what you have.
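For context, the underlying Diffusers API exposes the two offload flavors directly. Exactly how SD.Next wires its balanced/sequential modes onto these calls internally is an assumption on my part, but the hooks involved are the same kind that can leave modules half-materialized:

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_single_file(
        "v1-5-pruned-fp16-emaonly.safetensors", torch_dtype=torch.float16
    )

    # Model-level offload: whole submodules hop between CPU and GPU per step.
    pipe.enable_model_cpu_offload()

    # Sequential offload (what --lowvram maps to per the offload wiki) moves weights
    # layer by layer; it is much slower, and only one of the two should be active.
    # pipe.enable_sequential_cpu_offload()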


Cause 2: Diffusers backend requires you to set the correct “Pipeline type” for safetensors models

SD.Next’s Diffusers wiki is explicit:

  • “When loading safetensors models, you must specify model pipeline type in: UI → Settings → Diffusers → Pipeline.” (GitHub)

If the pipeline type is wrong (or left at an incompatible choice), SD.Next can partially construct the pipeline and then fail during weight mapping, leaving modules on meta.


Cause 3: Version mismatch (SD.Next vs torch/diffusers/accelerate) in an offload-heavy codepath

SD.Next changelog includes fixes that are basically aimed at your symptom:

  • “Add additional pipeline types for manual model loads when loading from safetensors”
  • “Improve model offloading … when dealing with meta tensors” (GitHub)

If you are just slightly behind a fix, meta/offload failures can persist.


Cause 4: Stale configs override new defaults and keep you stuck in a broken state

SD.Next troubleshooting calls out two commonly “poisoned” settings files, plus a general recommendation:

  • ui-config.json can become bloated and override new defaults with invalid older settings
  • config.json can be renamed to regenerate cleanly
  • they also recommend running with --debug to expose the real root cause (GitHub)

This matters because you can “change settings” and still have SD.Next apply old overrides at startup.


Cause 5: Device-map / offload hooks mixed with manual .to() behavior

Diffusers docs warn that if a pipeline was device-mapped, you may need to reset that before calling .to() or offload methods. (Hugging Face)

You are not calling .to() directly, but SD.Next (or its libraries) effectively does equivalent moves internally. A mixed “device map + move” lifecycle is a classic way to end up with meta leftovers.
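A minimal sketch of that conflict and its documented workaround, assuming a recent diffusers version that supports balanced device maps for pipelines (the repo id is just a placeholder for any SD 1.5 checkpoint):

    import torch
    from diffusers import StableDiffusionPipeline

    # Accelerate-managed placement: "balanced" spreads modules across available devices.
    pipe = StableDiffusionPipeline.from_pretrained(
        "some-org/stable-diffusion-v1-5",   # placeholder repo id
        torch_dtype=torch.float16,
        device_map="balanced",
    )

    # Per the Diffusers docs, reset the device map before manual moves or offload
    # methods; skipping this is a classic way to end up with meta leftovers.
    pipe.reset_device_map()
    pipe.to("cuda")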


Concrete fix plan for your exact log (do in order)

Step 0: Turn on debug so you can see the backend and offload choices

Run SD.Next with:

  • --debug (SD.Next recommends this for troubleshooting) (GitHub)

In the first ~50 lines, you want to find lines that clearly state:

  • backend = Diffusers or Original
  • offload mode chosen
  • lowvram/medvram flags

If it still says Diffusers, your --backend original was not applied.


Step 1: Fix the command-line dash problem (common Windows trap)

Make sure you are using ASCII double-hyphen:

  • ✅ --backend original
  • ❌ –backend original (Unicode en-dash)

Because your log still shows StableDiffusionPipeline, you are still on Diffusers. (GitHub)

Also confirm you are launching the same webui.bat you edited.
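If you would rather verify the character than eyeball it, here is a quick check (paste the flag exactly as it appears in your .bat file):

    # ASCII hyphen prints 0x2d; the en-dash that editors and chat apps substitute prints 0x2013.
    flag = "–backend"   # replace with the flag copied from your launch line
    for ch in flag:
        print(ch, hex(ord(ch)))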


Step 2: Disable offload completely (best single test for meta issues)

In UI:

Settings → Models & Loading → Model offload mode
Set to None/Disabled (wording varies).

Why: SD.Next offload modes are explicitly tied to sequential/balanced behavior and can be automatically triggered by --lowvram. (GitHub)

Also check you are not using:

  • --lowvram
  • --medvram

Those flags explicitly split/offload model parts (GitHub), and the offload wiki states that --lowvram triggers sequential offload. (GitHub)

If disabling offload fixes it, you have your root cause: an offload/dispatch bug or version mismatch in that path.


Step 3: If you stay on Diffusers backend, set the correct pipeline type for SD 1.5 safetensors

Go to:

Settings → Diffusers → Pipeline

Then select the pipeline appropriate for SD 1.5 txt2img.

This is not optional for safetensors in SD.Next Diffusers mode. The wiki states you “must specify” it. (GitHub)

Reload the model after changing it.


Step 4: Update SD.Next to a build that includes meta/offload improvements

Because the changelog explicitly mentions improving offloading when dealing with meta tensors and adding pipeline types for safetensors. (GitHub)

Use the SD.Next FAQ recovery ladder:

  1. webui.bat --upgrade
  2. if still broken: webui.bat --reinstall
  3. if still broken: webui.bat --reset (GitHub)

This is the cleanest way to fix a broken torch/diffusers/accelerate mix without manual pip surgery.


Step 5: Reset config files that commonly keep bad settings alive

SD.Next troubleshooting guidance:

  • If ui-config.json is large, delete it (older versions saved too much and can override defaults incorrectly)
  • Rename config.json to config-backup.json so SD.Next regenerates a fresh config
  • Run with --debug (GitHub)

This step is high leverage when “I changed settings but nothing changed.”


What I think is happening in your run (most probable)

Given your exact evidence:

  • Diffusers pipeline class
  • All major modules device=meta

The highest-probability explanation is:

  1. SD.Next Diffusers backend started loading the model via an “empty init” approach (meta) for memory reasons (offload/device-map path). (Hugging Face)
  2. The follow-up dispatch step failed or was never reached due to a mismatch in offload mode, pipeline type, or package versions. (GitHub)
  3. Something then tried to move the pipeline to GPU, triggering the meta copy exception. (PyTorch Documentation)

This matches known SD.Next offload failures (balanced offload issue) (GitHub) and known Diffusers single-file workflow failures that crash at pipe.to("cuda") with the same exception. (GitHub)


What to paste if you want a precise call on which knob is the culprit

Paste these sections (text only):

  1. First ~60 lines of startup with --debug (shows backend, torch, device, offload decisions) (GitHub)
  2. The lines immediately before the NotImplementedError (the part where it tries to load/dispatch weights)

With that, it’s usually obvious whether it’s:

  • offload mode selection,
  • wrong Diffusers pipeline type,
  • or dependency drift.



Summary

  • Your modules are on device=meta, meaning weights are not materialized, and PyTorch refuses to copy meta tensors. (PyTorch Documentation)
  • Your run is effectively Diffusers backend, despite your attempt to force Original, because the pipeline class is StableDiffusionPipeline. (GitHub)
  • The most likely fixes are: disable offload, set the correct Diffusers pipeline type for safetensors, upgrade/reinstall/reset, and regenerate config files. (GitHub)