Custom Auto-Label Download

Superb AI Custom Auto-Label Download and Use

Below are some examples we have made for downloading and using our Custom Auto-Label model.

Getting Started

This page walks through example code for using our Custom Auto-Label model in a customizable way for developers. To begin setting up your environment, see https://gallery.ecr.aws/r9d1q5w0/cal-examples, which will help you set up your Docker environment.

Preparing example data

To begin, we will need to prepare the example data and directories:
import os
from urllib.request import urlretrieve
from zipfile import ZipFile

example_dir = '/work/examples'
zip_file = 'pet-classification.zip'
model_file = 'pet-classification.h5'
info_file = 'pet-classification.json'
img_file = 'dog.png'

os.makedirs(example_dir, exist_ok=True)
zip_path = os.path.join(example_dir, zip_file)
model_path = os.path.join(example_dir, model_file)
info_path = os.path.join(example_dir, info_file)
img_path = os.path.join(example_dir, img_file)

zip_url = f'https://spbai-superb-biz-test.s3.ap-northeast-2.amazonaws.com/cal-examples/{zip_file}'
urlretrieve(zip_url, zip_path)

with ZipFile(zip_path) as f:
    f.extract(model_file, path=example_dir)
    f.extract(info_file, path=example_dir)
    f.extract(img_file, path=example_dir)
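As a quick sanity check (a minimal addition, not part of the original example), you can verify that the extracted files exist before moving on:

# Confirm that every extracted file is where we expect it
for path in (model_path, info_path, img_path):
    assert os.path.exists(path), f'Missing file: {path}'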

Loading Custom Auto-Label model

We first import the dependencies we need, as shown below.
import tensorflow as tf
from PIL import Image

%matplotlib inline
import matplotlib.pyplot as plt
Then, because running the Custom Auto-Label model requires a GPU, we check for one. In addition, this sets TensorFlow's memory growth setting so that the GPU can be used to its full capacity.
try:
    # Enable memory growth on every visible GPU
    for device in tf.config.list_physical_devices('GPU'):
        tf.config.experimental.set_memory_growth(device, True)
except:
    pass
We will then load the model that we have downloaded.
model = tf.keras.models.load_model(model_path)
The output will look something like:
Tensor("Placeholder:0", shape=(1, 197, 1), dtype=float32) Tensor("Placeholder_1:0", shape=(), dtype=float32)
Tensor("Placeholder:0", shape=(1, 197, 768), dtype=float32) Tensor("Placeholder_1:0", shape=(768,), dtype=float32)
Tensor("Placeholder:0", shape=(768,), dtype=float32) Tensor("Placeholder_1:0", shape=(1, 197, 768), dtype=float32)
Tensor("Placeholder:0", shape=(768,), dtype=float32) Tensor("Placeholder_1:0", shape=(1, 197, 768), dtype=float32)
Tensor("Placeholder:0", shape=(1, 12, 197, 197), dtype=float32) Tensor("Placeholder_1:0", shape=(1, 12, 197, 197), dtype=float32)
Tensor("Placeholder:0", shape=(768,), dtype=float32) Tensor("Placeholder_1:0", shape=(1, 197, 768), dtype=float32)
Tensor("Placeholder:0", shape=(1, 197, 1), dtype=float32) Tensor("Placeholder_1:0", shape=(), dtype=float32)
Tensor("Placeholder:0", shape=(1, 197, 768), dtype=float32) Tensor("Placeholder_1:0", shape=(768,), dtype=float32)
Tensor("Placeholder:0", shape=(3072,), dtype=float32) Tensor("Placeholder_1:0", shape=(1, 197, 3072), dtype=float32)
Tensor("Placeholder:0", shape=(1, 197, 3072), dtype=float32) Tensor("Placeholder_1:0", shape=(), dtype=float32)
Tensor("Placeholder:0", shape=(768,), dtype=float32) Tensor("Placeholder_1:0", shape=(1, 197, 768), dtype=float32)
Tensor("Placeholder:0", shape=(1, 197, 1), dtype=float32) Tensor("Placeholder_1:0", shape=(), dtype=float32)
Tensor("Placeholder:0", shape=(1, 197, 768), dtype=float32) Tensor("Placeholder_1:0", shape=(768,), dtype=float32)
Tensor("Placeholder:0", shape=(768,), dtype=float32) Tensor("Placeholder_1:0", shape=(1, 197, 768), dtype=float32)
We can check the model summary:
model.summary()
The output for this is:
Model: "aff9616b-704e-4605-bf38-8e34dacdbd98"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input (InputLayer)           [(1, None, None, 3)]      0

tf_op_layer_channel_flip (T  (1, None, None, 3)        0
ensorFlowOpLayer)

resize_and_rescale (Sequent  (1, 224, 224, 3)          0
ial)

tf_op_layer_subtract_mean (  (1, 224, 224, 3)          0
TensorFlowOpLayer)

tf_op_layer_DivNoNan (Tenso  (1, 224, 224, 3)          0
rFlowOpLayer)

tf_op_layer_Transpose (Tens  (1, 3, 224, 224)          0
orFlowOpLayer)

backbone (Functional)        (1, 197, 768)             85525248

tf_op_layer_reshape (Tensor  (1, 151296)               0
FlowOpLayer)

head (Functional)            (1, 3)                    38798595

tf_op_layer_Sigmoid (Tensor  (1, 3)                    0
FlowOpLayer)

=================================================================
Total params: 124,323,843
Trainable params: 124,323,843
Non-trainable params: 0
_________________________________________________________________

Running the model

To run our model for the first time, we first need to load the example image.
pil_img = Image.open(img_path)
plt.imshow(pil_img)
plt.show()
Then, to make the image fit the model's input format, we have made the function preprocess_image.
def preprocess_image(pil_img, input_width, input_height):
    # Resize PIL image to (w, h) and convert to (1, h, w, 3) tensor
    resized_img = pil_img.resize((input_width, input_height))
    tensor_hwc = tf.keras.preprocessing.image.img_to_array(resized_img)
    tensor_1hwc = tf.expand_dims(tensor_hwc, 0)
    return tensor_1hwc
We set input_width and input_height here. In this example both are set to 224, but you can plug in the settings that you need.
input_width = input_height = 224
tensor_1hwc = preprocess_image(pil_img, input_width, input_height)
tensor_1hwc.shape
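For a 224x224 input, the resulting shape is:

TensorShape([1, 224, 224, 3])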
With this all done, we can now run a prediction.
prediction = model.predict(tensor_1hwc)
The output will look like:
Tensor("aff9616b-704e-4605-bf38-8e34dacdbd98/backbone/261/Mean:0", shape=(1, 197, 1), dtype=float32) Tensor("aff9616b-704e-4605-bf38-8e34dacdbd98/backbone/263_const2/Const:0", shape=(), dtype=float32)
Tensor("aff9616b-704e-4605-bf38-8e34dacdbd98/backbone/266/Mul:0", shape=(1, 197, 768), dtype=float32) Tensor("aff9616b-704e-4605-bf38-8e34dacdbd98/backbone/267_const2/Const:0", shape=(768,), dtype=float32)
Tensor("aff9616b-704e-4605-bf38-8e34dacdbd98/backbone/270_const1/Const:0", shape=(768,), dtype=float32) Tensor("aff9616b-704e-4605-bf38-8e34dacdbd98/backbone/269/Tensordot:0", shape=(1, 197, 768), dtype=float32)
Tensor("aff9616b-704e-4605-bf38-8e34dacdbd98/backbone/285_const1/Const:0", shape=(768,), dtype=float32) Tensor("aff9616b-704e-4605-bf38-8e34dacdbd98/backbone/284/Tensordot:0", shape=(1, 197, 768), dtype=float32)
Tensor("aff9616b-704e-4605-bf38-8e34dacdbd98/backbone/311/truediv:0", shape=(1, 12, 197, 197), dtype=float32) Tensor("aff9616b-704e-4605-bf38-8e34dacdbd98/backbone/319_const2/Const:0", shape=(1, 12, 197, 197), dtype=float32)
Tensor("aff9616b-704e-4605-bf38-8e34dacdbd98/backbone/333_const1/Const:0", shape=(768,), dtype=float32) Tensor("aff9616b-704e-4605-bf38-8e34dacdbd98/backbone/332/Tensordot:0", shape=(1, 197, 768), dtype=float32)
Tensor("aff9616b-704e-4605-bf38-8e34dacdbd98/backbone/341/Mean:0", shape=(1, 197, 1), dtype=float32) Tensor("aff9616b-704e-4605-bf38-8e34dacdbd98/backbone/343_const2/Const:0", shape=(), dtype=float32)
If we inspect prediction[0], the output is an array with one confidence score per class, as shown below. Later we will decorate this output so that it is more user friendly.
array([9.9952102e-01, 4.4798324e-04, 2.0798518e-04], dtype=float32)

Measuring latency

Since we have already run our model once, we will now measure its latency.
import time
from contextlib import contextmanager

# For type hints
from typing import Iterator, Callable

@contextmanager
def eval_latency() -> Iterator[Callable]:
    t_start = time.time()
    timer = lambda: time.time() - t_start
    yield timer

with eval_latency() as timer:
    prediction = model.predict(tensor_1hwc)

print(f'Elapsed time: {timer():.3f}s')
The output for the model looks like:
Elapsed time: 0.080s
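A single timing can be noisy. If you want a steadier number, one option (a minimal sketch, not part of the original example) is to average the latency over several runs:

# Average the prediction latency over multiple runs
n_runs = 10
with eval_latency() as timer:
    for _ in range(n_runs):
        model.predict(tensor_1hwc)
print(f'Average latency: {timer() / n_runs:.3f}s')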

Speeding up inference using tf.function

If you would like to speed up inference, we have written code that will help you do that. Recall that the runtime before this change was 0.080s (measured in the same environment, on an NVIDIA TITAN RTX).
Here is the code that wraps the model with tf.function so that it runs faster:
from functools import partial

def predict_wrapper(pil_img, model):
    return model(pil_img)

# Wrap the model for faster computation using tf.function
predict_tf_function = tf.function(partial(predict_wrapper, model=model))

# Preload the function
dummy_input = tf.zeros((1, input_height, input_width, 3))
_ = predict_tf_function(dummy_input)
Now we define and run a fast_predict function to check how much faster the model runs.
def fast_predict(tensor_1hwc):
    prediction = predict_tf_function(tensor_1hwc)
    return prediction[0].numpy()

with eval_latency() as timer:
    prediction = fast_predict(tensor_1hwc)

print(f'Elapsed time: {timer():.3f}s')
The output for this looks like:
Elapsed time: 0.017s
We can see that the runtime dropped from 0.080s to 0.017s. (Note that tf.function traces a new graph for each new input shape, so this speedup assumes a fixed input size.)

Decorating outputs with model info

Now that we have run our model and measured its latency, we provide functions that help the user get the information they need.
First, we can look at the JSON model info that accompanies the model:
import os, json

model_info_path = os.path.splitext(model_path)[0] + '.json'
model_info = json.load(open(model_info_path))
model_info
The output for this looks like:
{'name': 'Pet Classification',
 'category': 'Pet',
 'type': 'radio',
 'options': [{'name': 'Dog'}, {'name': 'Cat'}, {'name': 'Other'}],
 'performances': [{'name': 'Dog',
   'precision': 0.9996667777407531,
   'recall': 0.9996667777407531,
   'f_score': 0.9996667727407532},
  {'name': 'Cat',
   'precision': 0.9990009990009991,
   'recall': 0.9990009990009991,
   'f_score': 0.9990009940009992},
  {'name': 'Other', 'precision': 0.0, 'recall': 0.0, 'f_score': 0.0}]}
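The order of options matters: index i of the prediction array corresponds to model_info['options'][i]. A quick check (a small addition, not part of the original example):

# Class names in prediction-index order
[opt['name'] for opt in model_info['options']]
# -> ['Dog', 'Cat', 'Other']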
Now, we provide two functions: one returns the class with the highest confidence score, and the other returns all classes with their confidence scores.
1)
def get_top1_class(prediction, model_info):
    cls_index = prediction.argmax()
    cls = model_info['options'][cls_index]
    confidence = prediction[cls_index]
    return { 'name': cls['name'], 'confidence': confidence }

get_top1_class(prediction, model_info)
{'name': 'Dog', 'confidence': 0.999521}
2)
def get_all_classes(prediction, model_info):
    return [
        { 'name': cls['name'], 'confidence': confidence }
        for cls, confidence in zip(model_info['options'], prediction)
        if confidence > cls.get('score_thres', 0)
    ]

get_all_classes(prediction, model_info)
[{'name': 'Dog', 'confidence': 0.999521},
 {'name': 'Cat', 'confidence': 0.00044798324},
 {'name': 'Other', 'confidence': 0.00020798518}]
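Note that get_all_classes already reads an optional score_thres field from each option, so you can filter out low-confidence classes. A hypothetical example (score_thres is not set in the downloaded model info):

# Hypothetical: require at least 0.5 confidence for 'Cat'
model_info['options'][1]['score_thres'] = 0.5
get_all_classes(prediction, model_info)  # the 'Cat' entry is now filtered out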
The whole file for the example can be found in the link below.

Using Custom Auto-Label-examples package: KerasModel example

We have created a Python package named cal-examples to help you use your own Custom Auto-Label models more conveniently. The package is installed in this Docker image at /root/.local/lib/python3.7/site-packages/cal_examples/, which is accessible via the bash shell.

Preparation

from PIL import Image
from cal_examples.models import KerasModel
from cal_examples.utils import eval_latency
import tensorflow as tf  # needed for the GPU setup below

try:
    # Enable memory growth on every visible GPU
    for device in tf.config.list_physical_devices('GPU'):
        tf.config.experimental.set_memory_growth(device, True)
except:
    pass
import os

example_dir = '/work/examples'
model_path = os.path.join(example_dir, 'pet-classification.h5')
img_path = os.path.join(example_dir, 'dog.png')

Loading Custom Auto-Label model

We load the KerasModel as follows. The output will look similar to that of the previous case (loading the Custom Auto-Label model directly).
model = KerasModel(model_path)
As in the example above, you can print the model summary:
model.model.summary()
Up to this point, you may not notice a big difference from the example above. With our package, however, there is no need to load the JSON file separately to get the model info; the following returns the same dictionary we loaded from pet-classification.json earlier. Simply run:
model.model_info

Getting output info with our custom package

We now run the model on the example image and measure its latency.
pil_img = Image.open(img_path)

with eval_latency() as timer:
    np_prediction = model.predict(pil_img)

print(f'Elapsed time: {timer():.3f}s')
With our package there is no need to define separate helper functions for this; you can simply call:
print('Top-1 class:', model.get_top1_class(np_prediction))
print('All classes:', model.get_all_classes(np_prediction))
To get the output:
Top-1 class: {'name': 'Dog', 'confidence': 0.999521}
All classes: [{'name': 'Dog', 'confidence': 0.999521}, {'name': 'Cat', 'confidence': 0.00044798324}, {'name': 'Other', 'confidence': 0.00020798518}]
The whole file for the example can be found in the link below.

Using Custom Auto-Label-examples package: GradCamModel Example

This example is very similar to the KerasModel example above, but adds a heat map showing what the model focused on most when making its prediction.

Preparation

from PIL import Image
from cal_examples.models import GradCamModel
from cal_examples.utils import eval_latency, generate_heatmap_image

%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf  # needed for the GPU setup below

try:
    # Enable memory growth on every visible GPU
    for device in tf.config.list_physical_devices('GPU'):
        tf.config.experimental.set_memory_growth(device, True)
except:
    pass
import os

example_dir = '/work/examples'
model_path = os.path.join(example_dir, 'pet-classification.h5')
img_path = os.path.join(example_dir, 'dog.png')

Loading Custom Auto-Label model

model = GradCamModel(model_path)

Running the model

After loading the model you can check the summary and model_info as you did with the KerasModel.
The difference is that you can now run the model with the predict_with_gradcam method.
pil_img = Image.open(img_path)

with eval_latency() as timer:
    np_prediction, np_heatmap_0to1 = model.predict_with_gradcam(pil_img)

print(f'Elapsed time: {timer():.3f}s')
You can check the confidence levels just as you did with the KerasModel:
print('Top-1 class:', model.get_top1_class(np_prediction))
print('All classes:', model.get_all_classes(np_prediction))
You can then look at the heat map overlaid on the image with the following code.
pil_cam_img = generate_heatmap_image(pil_img, np_heatmap_0to1)

plt.imshow(pil_cam_img)
plt.show()
The output is the example image overlaid with a Grad-CAM heat map highlighting the regions the model attended to.
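If you want to keep the visualization, and assuming generate_heatmap_image returns a regular PIL image (as the pil_ prefix suggests), you can save it to disk:

# Save the heatmap overlay next to the example data (hypothetical output name)
pil_cam_img.save(os.path.join(example_dir, 'dog_gradcam.png'))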
The whole file for the example can be found in the link below.
Any other questions? E-mail us at [email protected].