
Update test_examples #3195

Open. andrey-churkin wants to merge 8 commits into develop.

Conversation

andrey-churkin (Contributor)

Changes

Add quantization_aware_training_tensorflow_mobilenet_v2 to the test scope

andrey-churkin requested a review from a team as a code owner on January 17, 2025 at 09:45.
@alexsu52 (Contributor) left a comment:

Please update the test duration and provide a link to a passed test_examples run.

@andrey-churkin (Contributor, Author):

Build # 645 (NNCF/nightly/test_examples)
tests.cross_fw.examples.test_examples.test_examples[quantization_aware_training_tensorflow_mobilenet_v2] -- 25 min

@andrey-churkin (Contributor, Author):

> Please update test duration and provide a link with passed test_examples

@alexsu52 Done, please review.

@andrey-churkin (Contributor, Author):

Build # 646 (with batch_size = 32)

@andrey-churkin (Contributor, Author):

Hyperparameter sweep results, RMSprop:

| batch_size | learning_rate | optimizer | metric (int8_top1) |
|-----------:|--------------:|-----------|--------------------|
| 16  | 1e-5 | RMSprop | 0.9673885107040405 |
| 32  | 1e-5 | RMSprop | 0.971464991569519  |
| 64  | 1e-5 | RMSprop | 0.971464991569519  |
| 128 | 1e-5 | RMSprop | 0.9712101817131042 |
| 16  | 1e-4 | RMSprop | 0.9449681639671326 |
| 32  | 1e-4 | RMSprop | 0.9518471360206604 |
| 64  | 1e-4 | RMSprop | 0.9727388620376587 |
| 128 | 1e-4 | RMSprop | 0.9678980708122253 |

Adam:

| batch_size | learning_rate | optimizer | metric (int8_top1) |
|-----------:|--------------:|-----------|--------------------|
| 16  | 1e-5 | Adam | 0.9691720008850098 |
| 32  | 1e-5 | Adam | 0.971464991569519  |
| 64  | 1e-5 | Adam | 0.9712101817131042 |
| 128 | 1e-5 | Adam | 0.9727388620376587 |
| 16  | 1e-4 | Adam | 0.9296815395355225 |
| 32  | 1e-4 | Adam | 0.95286625623703   |
| 64  | 1e-4 | Adam | 0.9656050801277161 |
| 128 | 1e-4 | Adam | 0.9643312096595764 |
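The sweep above can also be summarized programmatically. A minimal sketch (the numbers are copied from the tables; the `best_config` helper is mine, not part of the example under test) that picks the best configuration per optimizer:

```python
# Sweep results copied from the tables above: (batch_size, learning_rate, optimizer, int8_top1).
results = [
    (16, 1e-5, "RMSprop", 0.9673885107040405),
    (32, 1e-5, "RMSprop", 0.971464991569519),
    (64, 1e-5, "RMSprop", 0.971464991569519),
    (128, 1e-5, "RMSprop", 0.9712101817131042),
    (16, 1e-4, "RMSprop", 0.9449681639671326),
    (32, 1e-4, "RMSprop", 0.9518471360206604),
    (64, 1e-4, "RMSprop", 0.9727388620376587),
    (128, 1e-4, "RMSprop", 0.9678980708122253),
    (16, 1e-5, "Adam", 0.9691720008850098),
    (32, 1e-5, "Adam", 0.971464991569519),
    (64, 1e-5, "Adam", 0.9712101817131042),
    (128, 1e-5, "Adam", 0.9727388620376587),
    (16, 1e-4, "Adam", 0.9296815395355225),
    (32, 1e-4, "Adam", 0.95286625623703),
    (64, 1e-4, "Adam", 0.9656050801277161),
    (128, 1e-4, "Adam", 0.9643312096595764),
]

def best_config(rows, optimizer):
    """Return the row with the highest int8_top1 for the given optimizer."""
    return max((r for r in rows if r[2] == optimizer), key=lambda r: r[3])

print(best_config(results, "RMSprop"))  # best RMSprop row: batch 64, lr 1e-4
print(best_config(results, "Adam"))     # best Adam row: batch 128, lr 1e-5
```

Interestingly, both optimizers peak at the same int8_top1 (0.9727388620376587), but at different batch size and learning rate settings.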

@alexsu52 (Contributor) left a comment on the diff:

        loss=tf.keras.losses.CategoricalCrossentropy(),
        metrics=[tf.keras.metrics.CategoricalAccuracy()],
    )

-   tf_quantized_model.fit(train_dataset, epochs=3, verbose=1)
+   tf_quantized_model.fit(train_dataset, epochs=1, verbose=1)
Contributor

Suggested change:

- tf_quantized_model.fit(train_dataset, epochs=1, verbose=1)
+ # To minimize the example's runtime, we train for only 1 epoch. This is sufficient to demonstrate
+ # that the quantized model produced by QAT is more accurate than the one produced by PTQ.
+ # However, training for more than 1 epoch would further improve the quantized model's accuracy.
+ tf_quantized_model.fit(train_dataset, epochs=1, verbose=1)

Contributor Author

NNCF/nightly/test_examples: Build # 648

Contributor Author

PTQ drop: ~0.018
QAT drop: ~0.014
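Reading these two numbers together (variable names are mine; only the two accuracy drops are given above), QAT recovers roughly 0.004 top-1 relative to PTQ:

```python
# Top-1 accuracy drop relative to the FP32 baseline, as reported above.
ptq_drop = 0.018  # post-training quantization
qat_drop = 0.014  # quantization-aware training

# Accuracy recovered by fine-tuning the quantized model (QAT vs. PTQ).
recovered = ptq_drop - qat_drop
print(f"QAT improves over PTQ by ~{recovered:.3f} top-1")
```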

@andrey-churkin (Contributor, Author):

New results:

CPU:

| batch_size | learning_rate | optimizer | metric (int8_top1) |
|-----------:|--------------:|-----------|--------------------|
| 16  | 1e-5 | RMSprop | 0.9678980708122253 |
| 32  | 1e-5 | RMSprop | 0.9719745516777039 |
| 64  | 1e-5 | RMSprop | 0.9737579822540283 |
| 128 | 1e-5 | RMSprop | 0.9704458713531494 |
| 16  | 1e-4 | RMSprop | 0.9492993354797363 |
| 32  | 1e-4 | RMSprop | 0.9582165479660034 |
| 64  | 1e-4 | RMSprop | 0.9607643485069275 |
| 128 | 1e-4 | RMSprop | 0.9699363112449646 |

1 GPU:

| batch_size | learning_rate | optimizer | metric (int8_top1) |
|-----------:|--------------:|-----------|--------------------|
| 16  | 1e-5 | RMSprop | 0.968917191028595  |
| 32  | 1e-5 | RMSprop | 0.9729936122894287 |
| 64  | 1e-5 | RMSprop | 0.9717197418212891 |
| 128 | 1e-5 | RMSprop | 0.9709554314613342 |
| 16  | 1e-4 | RMSprop | 0.9475159049034119 |
| 32  | 1e-4 | RMSprop | 0.9615286588668823 |
| 64  | 1e-4 | RMSprop | 0.9666242003440857 |
| 128 | 1e-4 | RMSprop | 0.9681528806686401 |
