MobileNet V1
This model was released on 2017-04-17 and added to Hugging Face Transformers on 2022-11-21.
MobileNet V1
MobileNet V1 is a family of efficient convolutional neural networks optimized for on-device and embedded vision tasks. It achieves this efficiency by using depthwise separable convolutions instead of standard convolutions. The architecture allows easy trade-offs between latency and accuracy through two hyperparameters: a width multiplier (alpha) and an image resolution multiplier.
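To make the core building block concrete, here is a minimal sketch in plain PyTorch (not taken from the Transformers implementation) of a depthwise separable convolution: a per-channel 3x3 depthwise convolution followed by a 1x1 pointwise convolution. The class name and layer sizes are illustrative assumptions, and the paper's BatchNorm/ReLU layers are omitted for brevity.

```py
import torch
from torch import nn

class DepthwiseSeparableConv(nn.Module):
    """Illustrative depthwise separable convolution block (hypothetical helper,
    not part of Transformers)."""
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        # groups=in_channels applies one 3x3 filter per input channel (depthwise)
        self.depthwise = nn.Conv2d(
            in_channels, in_channels, kernel_size=3,
            stride=stride, padding=1, groups=in_channels, bias=False,
        )
        # 1x1 convolution mixes information across channels (pointwise)
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

block = DepthwiseSeparableConv(32, 64, stride=2)
print(block(torch.randn(1, 32, 112, 112)).shape)  # torch.Size([1, 64, 56, 56])
```

Compared with a standard 3x3 convolution, this factorization reduces the multiply-accumulate count by roughly a factor of 8 to 9, which is where MobileNet's efficiency comes from.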
You can find all the original MobileNet checkpoints under the Google organization.
The example below demonstrates how to classify an image with Pipeline or the AutoModel class.
```py
import torch
from transformers import pipeline

pipeline = pipeline(
    task="image-classification",
    model="google/mobilenet_v1_1.0_224",
    dtype=torch.float16,
    device=0
)
pipeline("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg")
```

```py
import torch
import requests
from PIL import Image
from transformers import AutoModelForImageClassification, AutoImageProcessor

image_processor = AutoImageProcessor.from_pretrained("google/mobilenet_v1_1.0_224")
model = AutoModelForImageClassification.from_pretrained("google/mobilenet_v1_1.0_224")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax(dim=-1).item()

class_labels = model.config.id2label
predicted_class_label = class_labels[predicted_class_id]
print(f"The predicted class label is: {predicted_class_label}")
```
- Checkpoint names follow the pattern `mobilenet_v1_{depth_multiplier}_{resolution}`, like `mobilenet_v1_1.0_224`. `1.0` is the depth multiplier and `224` is the image resolution.
- While trained on images of a specific size, the model architecture works with images of different sizes (minimum 32x32). The `MobileNetV1ImageProcessor` handles the necessary preprocessing.
- MobileNet is pretrained on ImageNet-1k, a dataset with 1000 classes. However, the model actually predicts 1001 classes. The additional class is an extra “background” class (index 0), as the snippet after this list shows.
- The original TensorFlow checkpoints determine the padding amount at inference because it depends on the input image size. To use the native PyTorch padding behavior, set `tf_padding=False` in `MobileNetV1Config`.

  ```py
  from transformers import MobileNetV1Config

  config = MobileNetV1Config.from_pretrained("google/mobilenet_v1_1.0_224", tf_padding=False)
  ```
- The Transformers implementation does not support the following features.
  - It uses global average pooling instead of the optional 7x7 average pooling with stride 2. For larger inputs, this gives a pooled output that is larger than a 1x1 pixel.
  - It does not support other `output_stride` values (fixed at 32). For smaller `output_stride` values, the original implementation uses dilated convolutions to prevent the spatial resolution from being reduced further.
  - `output_hidden_states=True` returns all intermediate hidden states. It is not possible to extract the output from specific layers only.
  - It does not include the quantized models from the original checkpoints because they include “FakeQuantization” operations to unquantize the weights.
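As a quick sanity check of the note about the extra background class, you can inspect the loaded model's config. This is a minimal sketch; the exact label strings come from the checkpoint metadata, so treat the printed values as checkpoint-dependent.

```py
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained("google/mobilenet_v1_1.0_224")

# 1001 labels: ImageNet-1k's 1000 classes plus the extra class at index 0
print(model.config.num_labels)
# Label strings depend on the checkpoint metadata; index 0 is the extra
# "background" class, the remaining indices are the ImageNet classes
print(model.config.id2label[0])
print(model.config.id2label[1])
```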
MobileNetV1Config
[[autodoc]] MobileNetV1Config
MobileNetV1ImageProcessor
[[autodoc]] MobileNetV1ImageProcessor
    - preprocess
MobileNetV1ImageProcessorFast
[[autodoc]] MobileNetV1ImageProcessorFast
    - preprocess
MobileNetV1Model
[[autodoc]] MobileNetV1Model
    - forward
MobileNetV1ForImageClassification
[[autodoc]] MobileNetV1ForImageClassification
    - forward