# ConvNeXT

## Overview

The ConvNeXT model was proposed in [A ConvNet for the 2020s](https://huggingface.co/papers/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.
ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them.

The abstract from the paper is the following:

*The "Roaring 20s" of visual recognition began with the introduction of Vision Transformers (ViTs), which quickly superseded ConvNets as the state-of-the-art image classification model. A vanilla ViT, on the other hand, faces difficulties when applied to general computer vision tasks such as object detection and semantic segmentation. It is the hierarchical Transformers (e.g., Swin Transformers) that reintroduced several ConvNet priors, making Transformers practically viable as a generic vision backbone and demonstrating remarkable performance on a wide variety of vision tasks. However, the effectiveness of such hybrid approaches is still largely credited to the intrinsic superiority of Transformers, rather than the inherent inductive biases of convolutions. In this work, we reexamine the design spaces and test the limits of what a pure ConvNet can achieve. We gradually "modernize" a standard ResNet toward the design of a vision Transformer, and discover several key components that contribute to the performance difference along the way. The outcome of this exploration is a family of pure ConvNet models dubbed ConvNeXt. Constructed entirely from standard ConvNet modules, ConvNeXts compete favorably with Transformers in terms of accuracy and scalability, achieving 87.8% ImageNet top-1 accuracy and outperforming Swin Transformers on COCO detection and ADE20K segmentation, while maintaining the simplicity and efficiency of standard ConvNets.*

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.jpg"
alt="描画" width="600"/>

<small> ConvNeXT architecture. Taken from the <a href="https://huggingface.co/papers/2201.03545">original paper</a>.</small>

This model was contributed by [nielsr](https://huggingface.co/nielsr). The TensorFlow version of the model was contributed by [ariG23498](https://github.com/ariG23498),
[gante](https://github.com/gante), and [sayakpaul](https://github.com/sayakpaul) (equal contribution). The original code can be found [here](https://github.com/facebookresearch/ConvNeXt).

## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ConvNeXT.

<PipelineTag pipeline="image-classification"/>

- [ConvNextForImageClassification](/docs/transformers/v4.57.0/ja/model_doc/convnext#transformers.ConvNextForImageClassification) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
- See also: [Image classification task guide](../tasks/image_classification)
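
ConvNeXT checkpoints can also be tried in a couple of lines with the image-classification `pipeline`; a minimal sketch (the checkpoint and test image are only examples):

```python
>>> from transformers import pipeline

>>> classifier = pipeline("image-classification", model="facebook/convnext-tiny-224")
>>> predictions = classifier("http://images.cocodataset.org/val2017/000000039769.jpg")
>>> print(predictions[0]["label"])  # highest-scoring ImageNet class
```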

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

## ConvNextConfig[[transformers.ConvNextConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class transformers.ConvNextConfig</name><anchor>transformers.ConvNextConfig</anchor><source>https://github.com/huggingface/transformers/blob/v4.57.0/src/transformers/models/convnext/configuration_convnext.py#L31</source><parameters>[{"name": "num_channels", "val": " = 3"}, {"name": "patch_size", "val": " = 4"}, {"name": "num_stages", "val": " = 4"}, {"name": "hidden_sizes", "val": " = None"}, {"name": "depths", "val": " = None"}, {"name": "hidden_act", "val": " = 'gelu'"}, {"name": "initializer_range", "val": " = 0.02"}, {"name": "layer_norm_eps", "val": " = 1e-12"}, {"name": "layer_scale_init_value", "val": " = 1e-06"}, {"name": "drop_path_rate", "val": " = 0.0"}, {"name": "image_size", "val": " = 224"}, {"name": "out_features", "val": " = None"}, {"name": "out_indices", "val": " = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **num_channels** (`int`, *optional*, defaults to 3) --
  The number of input channels.
- **patch_size** (`int`, *optional*, defaults to 4) --
  Patch size to use in the patch embedding layer.
- **num_stages** (`int`, *optional*, defaults to 4) --
  The number of stages in the model.
- **hidden_sizes** (`list[int]`, *optional*, defaults to [96, 192, 384, 768]) --
  Dimensionality (hidden size) at each stage.
- **depths** (`list[int]`, *optional*, defaults to [3, 3, 9, 3]) --
  Depth (number of blocks) for each stage.
- **hidden_act** (`str` or `function`, *optional*, defaults to `"gelu"`) --
  The non-linear activation function (function or string) in each block. If string, `"gelu"`, `"relu"`,
  `"selu"` and `"gelu_new"` are supported.
- **initializer_range** (`float`, *optional*, defaults to 0.02) --
  The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- **layer_norm_eps** (`float`, *optional*, defaults to 1e-12) --
  The epsilon used by the layer normalization layers.
- **layer_scale_init_value** (`float`, *optional*, defaults to 1e-6) --
  The initial value for the layer scale.
- **drop_path_rate** (`float`, *optional*, defaults to 0.0) --
  The drop rate for stochastic depth.
- **out_features** (`list[str]`, *optional*) --
  If used as backbone, list of features to output. Can be any of `"stem"`, `"stage1"`, `"stage2"`, etc.
  (depending on how many stages the model has). If unset and `out_indices` is set, will default to the
  corresponding stages. If unset and `out_indices` is unset, will default to the last stage. Must be in the
  same order as defined in the `stage_names` attribute.
- **out_indices** (`list[int]`, *optional*) --
  If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how
  many stages the model has). If unset and `out_features` is set, will default to the corresponding stages.
  If unset and `out_features` is unset, will default to the last stage. Must be in the
  same order as defined in the `stage_names` attribute.</paramsdesc><paramgroups>0</paramgroups></docstring>

This is the configuration class to store the configuration of a [ConvNextModel](/docs/transformers/v4.57.0/ja/model_doc/convnext#transformers.ConvNextModel). It is used to instantiate an
ConvNeXT model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the ConvNeXT
[facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) architecture.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.57.0/ja/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the
documentation from [PretrainedConfig](/docs/transformers/v4.57.0/ja/main_classes/configuration#transformers.PretrainedConfig) for more information.



<ExampleCodeBlock anchor="transformers.ConvNextConfig.example">

Example:
```python
>>> from transformers import ConvNextConfig, ConvNextModel

>>> # Initializing a ConvNext convnext-tiny-224 style configuration
>>> configuration = ConvNextConfig()

>>> # Initializing a model (with random weights) from the convnext-tiny-224 style configuration
>>> model = ConvNextModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```

</ExampleCodeBlock>
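
When the configuration drives a backbone, `out_features`/`out_indices` select which stages are returned as feature maps. A minimal sketch (randomly initialized weights) using `ConvNextBackbone`:

```python
>>> from transformers import ConvNextConfig, ConvNextBackbone

>>> # return the feature maps of the second and fourth stages
>>> config = ConvNextConfig(out_features=["stage2", "stage4"])
>>> backbone = ConvNextBackbone(config)
>>> backbone.channels  # hidden sizes of the selected stages
[192, 768]
```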

</div>

## ConvNextFeatureExtractor[[transformers.ConvNextFeatureExtractor]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class transformers.ConvNextFeatureExtractor</name><anchor>transformers.ConvNextFeatureExtractor</anchor><source>https://github.com/huggingface/transformers/blob/v4.57.0/src/transformers/models/convnext/feature_extraction_convnext.py#L28</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>

## ConvNextImageProcessor[[transformers.ConvNextImageProcessor]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class transformers.ConvNextImageProcessor</name><anchor>transformers.ConvNextImageProcessor</anchor><source>https://github.com/huggingface/transformers/blob/v4.57.0/src/transformers/models/convnext/image_processing_convnext.py#L53</source><parameters>[{"name": "do_resize", "val": ": bool = True"}, {"name": "size", "val": ": typing.Optional[dict[str, int]] = None"}, {"name": "crop_pct", "val": ": typing.Optional[float] = None"}, {"name": "resample", "val": ": Resampling = <Resampling.BILINEAR: 2>"}, {"name": "do_rescale", "val": ": bool = True"}, {"name": "rescale_factor", "val": ": typing.Union[int, float] = 0.00392156862745098"}, {"name": "do_normalize", "val": ": bool = True"}, {"name": "image_mean", "val": ": typing.Union[float, list[float], NoneType] = None"}, {"name": "image_std", "val": ": typing.Union[float, list[float], NoneType] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **do_resize** (`bool`, *optional*, defaults to `True`) --
  Controls whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden
  by `do_resize` in the `preprocess` method.
- **size** (`dict[str, int]`, *optional*, defaults to `{"shortest_edge": 384}`) --
  Resolution of the output image after `resize` is applied. If `size["shortest_edge"]` >= 384, the image is
  resized to `(size["shortest_edge"], size["shortest_edge"])`. Otherwise, the smaller edge of the image will
  be matched to `int(size["shortest_edge"]/crop_pct)`, after which the image is cropped to
  `(size["shortest_edge"], size["shortest_edge"])`. Only has an effect if `do_resize` is set to `True`. Can
  be overridden by `size` in the `preprocess` method.
- **crop_pct** (`float`, *optional*, defaults to 224 / 256) --
  Percentage of the image to crop. Only has an effect if `do_resize` is `True` and size < 384. Can be
  overridden by `crop_pct` in the `preprocess` method.
- **resample** (`PILImageResampling`, *optional*, defaults to `Resampling.BILINEAR`) --
  Resampling filter to use if resizing the image. Can be overridden by `resample` in the `preprocess` method.
- **do_rescale** (`bool`, *optional*, defaults to `True`) --
  Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by `do_rescale` in
  the `preprocess` method.
- **rescale_factor** (`int` or `float`, *optional*, defaults to `1/255`) --
  Scale factor to use if rescaling the image. Can be overridden by `rescale_factor` in the `preprocess`
  method.
- **do_normalize** (`bool`, *optional*, defaults to `True`) --
  Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess`
  method.
- **image_mean** (`float` or `list[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`) --
  Mean to use if normalizing the image. This is a float or list of floats the length of the number of
  channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
- **image_std** (`float` or `list[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`) --
  Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
  number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method.</paramsdesc><paramgroups>0</paramgroups></docstring>

Constructs a ConvNeXT image processor.
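
As a quick illustration of the resize logic described above: when `size["shortest_edge"] < 384`, the shorter edge is first matched to `int(size["shortest_edge"] / crop_pct)` and the result is center-cropped. A minimal sketch with a dummy image (the concrete sizes are illustrative):

```python
>>> import numpy as np
>>> from PIL import Image
>>> from transformers import ConvNextImageProcessor

>>> # dummy 480x640 RGB image standing in for real data
>>> image = Image.fromarray(np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8))

>>> processor = ConvNextImageProcessor(size={"shortest_edge": 224})
>>> pixel_values = processor(image, return_tensors="pt").pixel_values
>>> # shorter edge resized to int(224 / (224 / 256)) = 256, then center-cropped to 224x224
>>> list(pixel_values.shape)
[1, 3, 224, 224]
```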





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>preprocess</name><anchor>transformers.ConvNextImageProcessor.preprocess</anchor><source>https://github.com/huggingface/transformers/blob/v4.57.0/src/transformers/models/convnext/image_processing_convnext.py#L188</source><parameters>[{"name": "images", "val": ": typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), list['PIL.Image.Image'], list[numpy.ndarray], list['torch.Tensor']]"}, {"name": "do_resize", "val": ": typing.Optional[bool] = None"}, {"name": "size", "val": ": typing.Optional[dict[str, int]] = None"}, {"name": "crop_pct", "val": ": typing.Optional[float] = None"}, {"name": "resample", "val": ": typing.Optional[PIL.Image.Resampling] = None"}, {"name": "do_rescale", "val": ": typing.Optional[bool] = None"}, {"name": "rescale_factor", "val": ": typing.Optional[float] = None"}, {"name": "do_normalize", "val": ": typing.Optional[bool] = None"}, {"name": "image_mean", "val": ": typing.Union[float, list[float], NoneType] = None"}, {"name": "image_std", "val": ": typing.Union[float, list[float], NoneType] = None"}, {"name": "return_tensors", "val": ": typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None"}, {"name": "data_format", "val": ": ChannelDimension = <ChannelDimension.FIRST: 'channels_first'>"}, {"name": "input_data_format", "val": ": typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None"}]</parameters><paramsdesc>- **images** (`ImageInput`) --
  Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If
  passing in images with pixel values between 0 and 1, set `do_rescale=False`.
- **do_resize** (`bool`, *optional*, defaults to `self.do_resize`) --
  Whether to resize the image.
- **size** (`dict[str, int]`, *optional*, defaults to `self.size`) --
  Size of the output image after `resize` has been applied. If `size["shortest_edge"]` >= 384, the image
  is resized to `(size["shortest_edge"], size["shortest_edge"])`. Otherwise, the smaller edge of the
  image will be matched to `int(size["shortest_edge"]/ crop_pct)`, after which the image is cropped to
  `(size["shortest_edge"], size["shortest_edge"])`. Only has an effect if `do_resize` is set to `True`.
- **crop_pct** (`float`, *optional*, defaults to `self.crop_pct`) --
  Percentage of the image to crop if size < 384.
- **resample** (`int`, *optional*, defaults to `self.resample`) --
  Resampling filter to use if resizing the image. This can be one of the `PILImageResampling` filters. Only
  has an effect if `do_resize` is set to `True`.
- **do_rescale** (`bool`, *optional*, defaults to `self.do_rescale`) --
  Whether to rescale the image values to the range [0, 1].
- **rescale_factor** (`float`, *optional*, defaults to `self.rescale_factor`) --
  Rescale factor to rescale the image by if `do_rescale` is set to `True`.
- **do_normalize** (`bool`, *optional*, defaults to `self.do_normalize`) --
  Whether to normalize the image.
- **image_mean** (`float` or `list[float]`, *optional*, defaults to `self.image_mean`) --
  Image mean.
- **image_std** (`float` or `list[float]`, *optional*, defaults to `self.image_std`) --
  Image standard deviation.
- **return_tensors** (`str` or `TensorType`, *optional*) --
  The type of tensors to return. Can be one of:
  - Unset: Return a list of `np.ndarray`.
  - `TensorType.TENSORFLOW` or `'tf'`: Return a batch of type `tf.Tensor`.
  - `TensorType.PYTORCH` or `'pt'`: Return a batch of type `torch.Tensor`.
  - `TensorType.NUMPY` or `'np'`: Return a batch of type `np.ndarray`.
  - `TensorType.JAX` or `'jax'`: Return a batch of type `jax.numpy.ndarray`.
- **data_format** (`ChannelDimension` or `str`, *optional*, defaults to `ChannelDimension.FIRST`) --
  The channel dimension format for the output image. Can be one of:
  - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
  - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
  - Unset: Use the channel dimension format of the input image.
- **input_data_format** (`ChannelDimension` or `str`, *optional*) --
  The channel dimension format for the input image. If unset, the channel dimension format is inferred
  from the input image. Can be one of:
  - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
  - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
  - `"none"` or `ChannelDimension.NONE`: image in (height, width) format.</paramsdesc><paramgroups>0</paramgroups></docstring>

Preprocess an image or batch of images.




</div></div>

## ConvNextImageProcessorFast[[transformers.ConvNextImageProcessorFast]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class transformers.ConvNextImageProcessorFast</name><anchor>transformers.ConvNextImageProcessorFast</anchor><source>https://github.com/huggingface/transformers/blob/v4.57.0/src/transformers/models/convnext/image_processing_convnext_fast.py#L55</source><parameters>[{"name": "**kwargs", "val": ": typing_extensions.Unpack[transformers.models.convnext.image_processing_convnext_fast.ConvNextFastImageProcessorKwargs]"}]</parameters></docstring>

Constructs a fast ConvNeXT image processor.
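
The fast processor is designed to match the outputs of [ConvNextImageProcessor](/docs/transformers/v4.57.0/ja/model_doc/convnext#transformers.ConvNextImageProcessor) while running on `torch`/`torchvision` ops, so images can also be processed on GPU. A minimal sketch, assuming the preprocessing settings of the `facebook/convnext-tiny-224` checkpoint:

```python
>>> import torch
>>> from transformers import ConvNextImageProcessorFast

>>> # dummy channels-last uint8 image tensor standing in for real data
>>> image = torch.randint(0, 256, (480, 640, 3), dtype=torch.uint8)

>>> processor = ConvNextImageProcessorFast.from_pretrained("facebook/convnext-tiny-224")
>>> pixel_values = processor(image, return_tensors="pt").pixel_values
>>> list(pixel_values.shape)
[1, 3, 224, 224]
```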



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>preprocess</name><anchor>transformers.ConvNextImageProcessorFast.preprocess</anchor><source>https://github.com/huggingface/transformers/blob/v4.57.0/src/transformers/models/convnext/image_processing_convnext_fast.py#L70</source><parameters>[{"name": "images", "val": ": typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), list['PIL.Image.Image'], list[numpy.ndarray], list['torch.Tensor']]"}, {"name": "**kwargs", "val": ": typing_extensions.Unpack[transformers.models.convnext.image_processing_convnext_fast.ConvNextFastImageProcessorKwargs]"}]</parameters><paramsdesc>- **images** (`Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, list['PIL.Image.Image'], list[numpy.ndarray], list['torch.Tensor']]`) --
  Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If
  passing in images with pixel values between 0 and 1, set `do_rescale=False`.
- **do_resize** (`bool`, *optional*) --
  Whether to resize the image.
- **size** (`dict[str, int]`, *optional*) --
  Describes the maximum input dimensions to the model.
- **default_to_square** (`bool`, *optional*) --
  Whether to default to a square image when resizing, if size is an int.
- **resample** (`Union[PILImageResampling, F.InterpolationMode, NoneType]`) --
  Resampling filter to use if resizing the image. This can be one of the enum `PILImageResampling`. Only
  has an effect if `do_resize` is set to `True`.
- **do_center_crop** (`bool`, *optional*) --
  Whether to center crop the image.
- **crop_size** (`dict[str, int]`, *optional*) --
  Size of the output image after applying `center_crop`.
- **do_rescale** (`bool`, *optional*) --
  Whether to rescale the image.
- **rescale_factor** (`Union[int, float, NoneType]`) --
  Rescale factor to rescale the image by if `do_rescale` is set to `True`.
- **do_normalize** (`bool`, *optional*) --
  Whether to normalize the image.
- **image_mean** (`Union[float, list[float], NoneType]`) --
  Image mean to use for normalization. Only has an effect if `do_normalize` is set to `True`.
- **image_std** (`Union[float, list[float], NoneType]`) --
  Image standard deviation to use for normalization. Only has an effect if `do_normalize` is set to
  `True`.
- **do_pad** (`bool`, *optional*) --
  Whether to pad the image. Padding is done either to the largest size in the batch
  or to a fixed square size per image. The exact padding strategy depends on the model.
- **pad_size** (`dict[str, int]`, *optional*) --
  The size in `{"height": int, "width": int}` to pad the images to. Must be larger than any image size
  provided for preprocessing. If `pad_size` is not provided, images will be padded to the largest
  height and width in the batch. Applied only when `do_pad=True`.
- **do_convert_rgb** (`bool`, *optional*) --
  Whether to convert the image to RGB.
- **return_tensors** (`Union[str, ~utils.generic.TensorType, NoneType]`) --
  Returns stacked tensors if set to `pt`, otherwise returns a list of tensors.
- **data_format** (`~image_utils.ChannelDimension`, *optional*) --
  Only `ChannelDimension.FIRST` is supported. Added for compatibility with slow processors.
- **input_data_format** (`Union[str, ~image_utils.ChannelDimension, NoneType]`) --
  The channel dimension format for the input image. If unset, the channel dimension format is inferred
  from the input image. Can be one of:
  - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
  - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
  - `"none"` or `ChannelDimension.NONE`: image in (height, width) format.
- **device** (`torch.device`, *optional*) --
  The device to process the images on. If unset, the device is inferred from the input images.
- **disable_grouping** (`bool`, *optional*) --
  Whether to disable grouping of images by size to process them individually and not in batches.
  If None, will be set to True if the images are on CPU, and False otherwise. This choice is based on
  empirical observations, as detailed here: https://github.com/huggingface/transformers/pull/38157
- **crop_pct** (`float`, *optional*) --
  Percentage of the image to crop. Only has an effect if size < 384. Can be
  overridden by `crop_pct` in the `preprocess` method.</paramsdesc><paramgroups>0</paramgroups><rettype>`<class 'transformers.image_processing_base.BatchFeature'>`</rettype><retdesc>- **data** (`dict`) -- Dictionary of lists/arrays/tensors returned by the __call__ method ('pixel_values', etc.).
- **tensor_type** (`Union[None, str, TensorType]`, *optional*) -- You can give a tensor_type here to convert the lists of integers into PyTorch/TensorFlow/Numpy tensors at
  initialization.</retdesc></docstring>







</div></div>

<frameworkcontent>
<pt>

## ConvNextModel[[transformers.ConvNextModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class transformers.ConvNextModel</name><anchor>transformers.ConvNextModel</anchor><source>https://github.com/huggingface/transformers/blob/v4.57.0/src/transformers/models/convnext/modeling_convnext.py#L264</source><parameters>[{"name": "config", "val": ""}]</parameters><paramsdesc>- **config** ([ConvNextConfig](/docs/transformers/v4.57.0/ja/model_doc/convnext#transformers.ConvNextConfig)) --
  Model configuration class with all the parameters of the model. Initializing with a config file does not
  load the weights associated with the model, only the configuration. Check out the
  [from_pretrained()](/docs/transformers/v4.57.0/ja/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.</paramsdesc><paramgroups>0</paramgroups></docstring>

The bare ConvNeXT Model outputting raw hidden-states without any specific head on top.

This model inherits from [PreTrainedModel](/docs/transformers/v4.57.0/ja/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>transformers.ConvNextModel.forward</anchor><source>https://github.com/huggingface/transformers/blob/v4.57.0/src/transformers/models/convnext/modeling_convnext.py#L278</source><parameters>[{"name": "pixel_values", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "output_hidden_states", "val": ": typing.Optional[bool] = None"}]</parameters><paramsdesc>- **pixel_values** (`torch.FloatTensor` of shape `(batch_size, num_channels, image_size, image_size)`, *optional*) --
  The tensors corresponding to the input images. Pixel values can be obtained using
  [ConvNextImageProcessor](/docs/transformers/v4.57.0/ja/model_doc/convnext#transformers.ConvNextImageProcessor). See [ConvNextImageProcessor.__call__()](/docs/transformers/v4.57.0/ja/model_doc/detr#transformers.DetrFeatureExtractor.__call__) for details.
- **output_hidden_states** (`bool`, *optional*) --
  Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
  more detail.</paramsdesc><paramgroups>0</paramgroups><rettype>`transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention` or `tuple(torch.FloatTensor)`</rettype><retdesc>A `transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention` or a tuple of
`torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various
elements depending on the configuration ([ConvNextConfig](/docs/transformers/v4.57.0/ja/model_doc/convnext#transformers.ConvNextConfig)) and inputs.

- **last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`) -- Sequence of hidden-states at the output of the last layer of the model.
- **pooler_output** (`torch.FloatTensor` of shape `(batch_size, hidden_size)`) -- Last layer hidden-state after a pooling operation on the spatial dimensions.
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) -- Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
  one for the output of each layer) of shape `(batch_size, num_channels, height, width)`.

  Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.</retdesc></docstring>
The [ConvNextModel](/docs/transformers/v4.57.0/ja/model_doc/convnext#transformers.ConvNextModel) forward method, overrides the `__call__` special method.

<Tip>

Although the recipe for forward pass needs to be defined within this function, one should call the `Module`
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.

</Tip>







<ExampleCodeBlock anchor="transformers.ConvNextModel.forward.example">

Example:

```python
>>> from transformers import AutoImageProcessor, ConvNextModel
>>> import torch
>>> from datasets import load_dataset

>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]

>>> image_processor = AutoImageProcessor.from_pretrained("facebook/convnext-tiny-224")
>>> model = ConvNextModel.from_pretrained("facebook/convnext-tiny-224")

>>> inputs = image_processor(image, return_tensors="pt")

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
[1, 768, 7, 7]
```

</ExampleCodeBlock>


</div></div>

## ConvNextForImageClassification[[transformers.ConvNextForImageClassification]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class transformers.ConvNextForImageClassification</name><anchor>transformers.ConvNextForImageClassification</anchor><source>https://github.com/huggingface/transformers/blob/v4.57.0/src/transformers/models/convnext/modeling_convnext.py#L311</source><parameters>[{"name": "config", "val": ""}]</parameters><paramsdesc>- **config** ([ConvNextConfig](/docs/transformers/v4.57.0/ja/model_doc/convnext#transformers.ConvNextConfig)) --
  Model configuration class with all the parameters of the model. Initializing with a config file does not
  load the weights associated with the model, only the configuration. Check out the
  [from_pretrained()](/docs/transformers/v4.57.0/ja/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.</paramsdesc><paramgroups>0</paramgroups></docstring>

ConvNext Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for
ImageNet.

This model inherits from [PreTrainedModel](/docs/transformers/v4.57.0/ja/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>transformers.ConvNextForImageClassification.forward</anchor><source>https://github.com/huggingface/transformers/blob/v4.57.0/src/transformers/models/convnext/modeling_convnext.py#L329</source><parameters>[{"name": "pixel_values", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "labels", "val": ": typing.Optional[torch.LongTensor] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pixel_values** (`torch.FloatTensor` of shape `(batch_size, num_channels, image_size, image_size)`, *optional*) --
  The tensors corresponding to the input images. Pixel values can be obtained using
  [ConvNextImageProcessor](/docs/transformers/v4.57.0/ja/model_doc/convnext#transformers.ConvNextImageProcessor). See [ConvNextImageProcessor.__call__()](/docs/transformers/v4.57.0/ja/model_doc/detr#transformers.DetrFeatureExtractor.__call__) for details.
- **labels** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) --
  Labels for computing the image classification/regression loss. Indices should be in `[0, ...,
  config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
  `config.num_labels > 1` a classification loss is computed (Cross-Entropy).</paramsdesc><paramgroups>0</paramgroups><rettype>[transformers.modeling_outputs.ImageClassifierOutputWithNoAttention](/docs/transformers/v4.57.0/ja/main_classes/output#transformers.modeling_outputs.ImageClassifierOutputWithNoAttention) or `tuple(torch.FloatTensor)`</rettype><retdesc>A [transformers.modeling_outputs.ImageClassifierOutputWithNoAttention](/docs/transformers/v4.57.0/ja/main_classes/output#transformers.modeling_outputs.ImageClassifierOutputWithNoAttention) or a tuple of
`torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various
elements depending on the configuration ([ConvNextConfig](/docs/transformers/v4.57.0/ja/model_doc/convnext#transformers.ConvNextConfig)) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) -- Classification (or regression if config.num_labels==1) loss.
- **logits** (`torch.FloatTensor` of shape `(batch_size, config.num_labels)`) -- Classification (or regression if config.num_labels==1) scores (before SoftMax).
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) -- Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
  one for the output of each stage) of shape `(batch_size, num_channels, height, width)`. Hidden-states (also
  called feature maps) of the model at the output of each stage.</retdesc></docstring>
The [ConvNextForImageClassification](/docs/transformers/v4.57.0/ja/model_doc/convnext#transformers.ConvNextForImageClassification) forward method, overrides the `__call__` special method.

<Tip>

Although the recipe for forward pass needs to be defined within this function, one should call the `Module`
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.

</Tip>







<ExampleCodeBlock anchor="transformers.ConvNextForImageClassification.forward.example">

Example:

```python
>>> from transformers import AutoImageProcessor, ConvNextForImageClassification
>>> import torch
>>> from datasets import load_dataset

>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]

>>> image_processor = AutoImageProcessor.from_pretrained("facebook/convnext-tiny-224")
>>> model = ConvNextForImageClassification.from_pretrained("facebook/convnext-tiny-224")

>>> inputs = image_processor(image, return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
...
```

</ExampleCodeBlock>


</div></div>

</pt>
<tf>

## TFConvNextModel[[transformers.TFConvNextModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class transformers.TFConvNextModel</name><anchor>transformers.TFConvNextModel</anchor><source>https://github.com/huggingface/transformers/blob/v4.57.0/src/transformers/models/convnext/modeling_tf_convnext.py#L493</source><parameters>[{"name": "config", "val": ""}, {"name": "*inputs", "val": ""}, {"name": "add_pooling_layer", "val": " = True"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **config** ([ConvNextConfig](/docs/transformers/v4.57.0/ja/model_doc/convnext#transformers.ConvNextConfig)) -- Model configuration class with all the parameters of the model.
  Initializing with a config file does not load the weights associated with the model, only the
  configuration. Check out the [from_pretrained()](/docs/transformers/v4.57.0/ja/main_classes/model#transformers.TFPreTrainedModel.from_pretrained) method to load the model weights.</paramsdesc><paramgroups>0</paramgroups></docstring>
The bare ConvNext model outputting raw features without any specific head on top.

This model inherits from [TFPreTrainedModel](/docs/transformers/v4.57.0/ja/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)

This model is also a [keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and
behavior.

<Tip>

TensorFlow models and layers in `transformers` accept two formats as input:

- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.

The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:

- a single Tensor with `pixel_values` only and nothing else: `model(pixel_values)`
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
`model([pixel_values, attention_mask])` or `model([pixel_values, attention_mask, token_type_ids])`
- a dictionary with one or several input Tensors associated to the input names given in the docstring:
`model({"pixel_values": pixel_values, "token_type_ids": token_type_ids})`

Note that when creating models and layers with
[subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
about any of this, as you can just pass inputs like you would to any other Python function!

</Tip>





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>call</name><anchor>transformers.TFConvNextModel.call</anchor><source>https://github.com/huggingface/transformers/blob/v4.57.0/src/transformers/models/convnext/modeling_tf_convnext.py#L498</source><parameters>[{"name": "pixel_values", "val": ": TFModelInputType | None = None"}, {"name": "output_hidden_states", "val": ": bool | None = None"}, {"name": "return_dict", "val": ": bool | None = None"}, {"name": "training", "val": ": bool = False"}]</parameters><paramsdesc>- **pixel_values** (`np.ndarray`, `tf.Tensor`, `list[tf.Tensor]`, `dict[str, tf.Tensor]` or `dict[str, np.ndarray]`, and each example must have the shape `(batch_size, num_channels, height, width)`) --
  Pixel values. Pixel values can be obtained using [AutoImageProcessor](/docs/transformers/v4.57.0/ja/model_doc/auto#transformers.AutoImageProcessor). See
  [ConvNextImageProcessor.__call__()](/docs/transformers/v4.57.0/ja/model_doc/detr#transformers.DetrFeatureExtractor.__call__) for details.

- **output_hidden_states** (`bool`, *optional*) --
  Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
  more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
  used instead.
- **return_dict** (`bool`, *optional*) --
  Whether or not to return a [ModelOutput](/docs/transformers/v4.57.0/ja/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. This argument can be used in
  eager mode, in graph mode the value will always be set to True.</paramsdesc><paramgroups>0</paramgroups><rettype>[transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling](/docs/transformers/v4.57.0/ja/main_classes/output#transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling) or `tuple(tf.Tensor)`</rettype><retdesc>A [transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling](/docs/transformers/v4.57.0/ja/main_classes/output#transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling) or a tuple of `tf.Tensor` (if
`return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the
configuration ([ConvNextConfig](/docs/transformers/v4.57.0/ja/model_doc/convnext#transformers.ConvNextConfig)) and inputs.

- **last_hidden_state** (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`) -- Sequence of hidden-states at the output of the last layer of the model.
- **pooler_output** (`tf.Tensor` of shape `(batch_size, hidden_size)`) -- Last layer hidden-state of the first token of the sequence (classification token) further processed by a
  Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
  prediction (classification) objective during pretraining.

  This output is usually *not* a good summary of the semantic content of the input, you're often better with
  averaging or pooling the sequence of hidden-states for the whole input sequence.
- **hidden_states** (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) -- Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
  `(batch_size, sequence_length, hidden_size)`.

  Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) -- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
  sequence_length)`.

  Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
  heads.</retdesc></docstring>
The [TFConvNextModel](/docs/transformers/v4.57.0/ja/model_doc/convnext#transformers.TFConvNextModel) forward method, overrides the `__call__` special method.

<Tip>

Although the recipe for forward pass needs to be defined within this function, one should call the `Module`
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.

</Tip>







<ExampleCodeBlock anchor="transformers.TFConvNextModel.call.example">

Examples:

```python
>>> from transformers import AutoImageProcessor, TFConvNextModel
>>> from PIL import Image
>>> import requests

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = AutoImageProcessor.from_pretrained("facebook/convnext-tiny-224")
>>> model = TFConvNextModel.from_pretrained("facebook/convnext-tiny-224")

>>> inputs = image_processor(images=image, return_tensors="tf")
>>> outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
```

</ExampleCodeBlock>

</div></div>

## TFConvNextForImageClassification[[transformers.TFConvNextForImageClassification]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class transformers.TFConvNextForImageClassification</name><anchor>transformers.TFConvNextForImageClassification</anchor><source>https://github.com/huggingface/transformers/blob/v4.57.0/src/transformers/models/convnext/modeling_tf_convnext.py#L568</source><parameters>[{"name": "config", "val": ": ConvNextConfig"}, {"name": "*inputs", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **config** ([ConvNextConfig](/docs/transformers/v4.57.0/ja/model_doc/convnext#transformers.ConvNextConfig)) -- Model configuration class with all the parameters of the model.
  Initializing with a config file does not load the weights associated with the model, only the
  configuration. Check out the [from_pretrained()](/docs/transformers/v4.57.0/ja/main_classes/model#transformers.TFPreTrainedModel.from_pretrained) method to load the model weights.</paramsdesc><paramgroups>0</paramgroups></docstring>

ConvNext Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for
ImageNet.

This model inherits from [TFPreTrainedModel](/docs/transformers/v4.57.0/ja/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)

This model is also a [keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and
behavior.

<Tip>

TensorFlow models and layers in `transformers` accept two formats as input:

- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.

The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:

- a single Tensor with `pixel_values` only and nothing else: `model(pixel_values)`
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
`model([pixel_values, attention_mask])` or `model([pixel_values, attention_mask, token_type_ids])`
- a dictionary with one or several input Tensors associated to the input names given in the docstring:
`model({"pixel_values": pixel_values, "token_type_ids": token_type_ids})`

Note that when creating models and layers with
[subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
about any of this, as you can just pass inputs like you would to any other Python function!

</Tip>





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>call</name><anchor>transformers.TFConvNextForImageClassification.call</anchor><source>https://github.com/huggingface/transformers/blob/v4.57.0/src/transformers/models/convnext/modeling_tf_convnext.py#L584</source><parameters>[{"name": "pixel_values", "val": ": TFModelInputType | None = None"}, {"name": "output_hidden_states", "val": ": bool | None = None"}, {"name": "return_dict", "val": ": bool | None = None"}, {"name": "labels", "val": ": np.ndarray | tf.Tensor | None = None"}, {"name": "training", "val": ": bool | None = False"}]</parameters><paramsdesc>- **pixel_values** (`np.ndarray`, `tf.Tensor`, `list[tf.Tensor]`, `dict[str, tf.Tensor]` or `dict[str, np.ndarray]`, and each example must have the shape `(batch_size, num_channels, height, width)`) --
  Pixel values. Pixel values can be obtained using [AutoImageProcessor](/docs/transformers/v4.57.0/ja/model_doc/auto#transformers.AutoImageProcessor). See
  [ConvNextImageProcessor.__call__()](/docs/transformers/v4.57.0/ja/model_doc/detr#transformers.DetrFeatureExtractor.__call__) for details.

- **output_hidden_states** (`bool`, *optional*) --
  Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
  more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
  used instead.
- **return_dict** (`bool`, *optional*) --
  Whether or not to return a [ModelOutput](/docs/transformers/v4.57.0/ja/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. This argument can be used in
  eager mode, in graph mode the value will always be set to True.

- **labels** (`tf.Tensor` or `np.ndarray` of shape `(batch_size,)`, *optional*) --
  Labels for computing the image classification/regression loss. Indices should be in `[0, ...,
  config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
  `config.num_labels > 1` a classification loss is computed (Cross-Entropy).</paramsdesc><paramgroups>0</paramgroups><rettype>[transformers.modeling_tf_outputs.TFSequenceClassifierOutput](/docs/transformers/v4.57.0/ja/main_classes/output#transformers.modeling_tf_outputs.TFSequenceClassifierOutput) or `tuple(tf.Tensor)`</rettype><retdesc>A [transformers.modeling_tf_outputs.TFSequenceClassifierOutput](/docs/transformers/v4.57.0/ja/main_classes/output#transformers.modeling_tf_outputs.TFSequenceClassifierOutput) or a tuple of `tf.Tensor` (if
`return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the
configuration ([ConvNextConfig](/docs/transformers/v4.57.0/ja/model_doc/convnext#transformers.ConvNextConfig)) and inputs.

- **loss** (`tf.Tensor` of shape `(batch_size, )`, *optional*, returned when `labels` is provided) -- Classification (or regression if config.num_labels==1) loss.
- **logits** (`tf.Tensor` of shape `(batch_size, config.num_labels)`) -- Classification (or regression if config.num_labels==1) scores (before SoftMax).
- **hidden_states** (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) -- Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
  `(batch_size, sequence_length, hidden_size)`.

  Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) -- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
  sequence_length)`.

  Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
  heads.</retdesc></docstring>
The [TFConvNextForImageClassification](/docs/transformers/v4.57.0/ja/model_doc/convnext#transformers.TFConvNextForImageClassification) forward method, overrides the `__call__` special method.

<Tip>

Although the recipe for forward pass needs to be defined within this function, one should call the `Module`
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.

</Tip>







<ExampleCodeBlock anchor="transformers.TFConvNextForImageClassification.call.example">

Examples:

```python
>>> from transformers import AutoImageProcessor, TFConvNextForImageClassification
>>> import tensorflow as tf
>>> from PIL import Image
>>> import requests

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = AutoImageProcessor.from_pretrained("facebook/convnext-tiny-224")
>>> model = TFConvNextForImageClassification.from_pretrained("facebook/convnext-tiny-224")

>>> inputs = image_processor(images=image, return_tensors="tf")
>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_class_idx = tf.math.argmax(logits, axis=-1)[0]
>>> print("Predicted class:", model.config.id2label[int(predicted_class_idx)])
```

</ExampleCodeBlock>

</div></div>

</tf>
</frameworkcontent>

<EditOnGithub source="https://github.com/huggingface/transformers/blob/main/docs/source/ja/model_doc/convnext.md" />