# TorchScript Sample Inference Scripts

In the following pages we provide sample scripts that can be used to run TorchScript models in Python. Please keep in mind that these models can also be run in C++ using the TorchScript API.
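As a minimal sketch of what such a script does, the snippet below scripts and saves a tiny stand-in module, then loads it back and runs a forward pass. The `TinyModel` class and the `model.pt` path are illustrative only; in practice you would load the TorchScript file you exported from Model Playground.

```python
import torch

class TinyModel(torch.nn.Module):
    """Stand-in for a real exported model: global-average-pool + sigmoid."""
    def forward(self, x):
        return torch.sigmoid(x.mean(dim=(2, 3)))

# Script and save the stand-in; your exported "model.pt" replaces this step.
scripted = torch.jit.script(TinyModel())
torch.jit.save(scripted, "model.pt")

# Inference: load the TorchScript file and run a forward pass.
model = torch.jit.load("model.pt")
model.eval()
with torch.no_grad():
    batch = torch.rand(1, 3, 224, 224)  # NCHW float tensor
    out = model(batch)
print(out.shape)  # shape: (batch, channels)
```

The same saved file can be loaded from C++ with `torch::jit::load`, which is what makes TorchScript useful for deployment outside Python.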

{% hint style="info" %}
Please also note that if you require smaller or faster models, or models made specifically for mobile devices, you may want to return to Model Playground and choose a different architecture, use smaller images, or reduce model parameters to optimize runtime and/or memory usage as needed.
{% endhint %}

If you note a discrepancy between the metrics reported in Model Playground and those from your deployed model, the most likely cause is that you are not using the correct image transforms.

We recommend looking at the "config.yaml" file to see the transforms you used for validation/testing, and using the excellent [albumentations](https://github.com/albumentations-team/albumentations) library, which provides almost all of them.

Please note that you must replicate or implement these transforms yourself if you are deploying to an environment where albumentations is not available. You can read about [using and building the transformations](https://wiki.cloudfactory.com/docs/userdocs/model-playground/image-transformations) on the corresponding page.
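As one hedged example of such a re-implementation, the sketch below reproduces two common validation transforms (resize and per-channel normalization) in plain NumPy. The resize here is nearest-neighbour for brevity, whereas albumentations' `Resize` defaults to bilinear interpolation; the mean/std values are the common ImageNet statistics, and your config.yaml may specify different transforms or parameters.

```python
import numpy as np

# ImageNet statistics, a common default; check config.yaml for the real values.
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def resize_nearest(img, size):
    """Nearest-neighbour resize to (size, size); a simplified stand-in for
    albumentations' Resize (which uses bilinear interpolation by default)."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

def preprocess(img, size=224):
    """HWC uint8 image -> normalized float32 array ready for the model."""
    img = resize_nearest(img, size)
    img = img.astype(np.float32) / 255.0
    return (img - MEAN) / STD

x = preprocess(np.zeros((480, 640, 3), dtype=np.uint8))
print(x.shape, x.dtype)  # (224, 224, 3) float32
```

If your config lists additional transforms (center crop, padding, etc.), each one must be replicated with matching parameters, or the metrics will not match those reported in Model Playground.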

* Sample Inference for Attribute Prediction
* Sample Inference for Classification
* Sample Inference for Image Tagging
* Sample Inference for Object Detection
* Sample Inference for Instance Segmentation
* Sample Inference for Semantic Segmentation


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.cloudfactory.com/model-playground/torchscript-sample-inference-scripts.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
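Building such a request URL can be done with the standard library; the sketch below only constructs and encodes the URL (the `ask_url` helper name is hypothetical), leaving the actual GET to `urllib.request` or any HTTP client.

```python
from urllib.parse import urlencode

# Page URL from the documentation above.
BASE = "https://docs.cloudfactory.com/model-playground/torchscript-sample-inference-scripts.md"

def ask_url(question: str) -> str:
    """Hypothetical helper: attach the natural-language question
    as the percent-encoded `ask` query parameter."""
    return BASE + "?" + urlencode({"ask": question})

url = ask_url("Which transforms are applied during validation?")
print(url)
```

An HTTP GET on the resulting URL returns a direct answer plus relevant excerpts and sources from the documentation.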
