Project in Mathematical Modelling¶
Importing Libraries¶
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import time
from sklearn.manifold import TSNE
from sklearn.model_selection import train_test_split, StratifiedKFold
from sklearn.metrics import classification_report, confusion_matrix
import tensorflow as tf
from tensorflow.keras.models import Sequential, load_model, Model
from tensorflow.keras.layers import Dense, Dropout, Input
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
from tensorflow.keras.metrics import Precision, Recall
# Set random seed for reproducibility
np.random.seed(23200555)
tf.random.set_seed(23200555)
We load the dataset from a .feather file rather than a .csv, owing to feather's fast read/write performance.
# Load data from either .csv or .feather
currency_data = pd.read_feather("data/banknote_net.feather") # feather is faster and more robust than csv.
currency_data.head()
| v_0 | v_1 | v_2 | v_3 | v_4 | v_5 | v_6 | v_7 | v_8 | v_9 | ... | v_248 | v_249 | v_250 | v_251 | v_252 | v_253 | v_254 | v_255 | Currency | Denomination | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0.423395 | 0.327657 | 2.568988 | 3.166228 | 4.801421 | 5.531792 | 2.458083 | 1.218453 | 0.000000 | 1.116785 | ... | 0.000000 | 2.273451 | 5.790633 | 0.000000 | 0.000000 | 0.0 | 5.635400 | 0.000000 | AUD | 100_1 |
| 1 | 1.158823 | 1.669602 | 3.638447 | 2.823524 | 4.839890 | 2.777757 | 0.753350 | 0.764005 | 0.347871 | 1.928572 | ... | 0.000000 | 2.329623 | 3.516146 | 0.000000 | 0.000000 | 0.0 | 2.548191 | 1.053410 | AUD | 100_1 |
| 2 | 0.000000 | 0.958235 | 4.706119 | 1.688242 | 3.312702 | 4.516483 | 0.000000 | 1.876461 | 2.250795 | 1.883192 | ... | 0.811282 | 5.591417 | 1.879267 | 0.641139 | 0.571079 | 0.0 | 1.861483 | 2.172145 | AUD | 100_1 |
| 3 | 0.920511 | 1.820294 | 3.939334 | 3.206829 | 6.253655 | 0.942557 | 2.952453 | 0.000000 | 2.064298 | 1.367196 | ... | 1.764936 | 3.415151 | 2.518404 | 0.582229 | 1.105192 | 0.0 | 1.566918 | 0.533945 | AUD | 100_1 |
| 4 | 0.331918 | 0.000000 | 3.330771 | 3.023437 | 4.369099 | 5.177336 | 1.499362 | 0.590646 | 0.553625 | 1.405708 | ... | 0.000000 | 4.615945 | 4.825463 | 0.302261 | 0.378229 | 0.0 | 2.710654 | 0.325945 | AUD | 100_1 |
5 rows × 258 columns
The first 256 columns correspond to image embeddings and the last two columns contain the associated currency and denomination of the note.
# Total number of images
print(f"Total number of images is {currency_data.shape[0]}")
# Unique number of currencies
print(f"Total number of currencies is {currency_data.Currency.unique().shape[0]}")
# Unique number of denominations (including back and front of each banknote)
combined_series = currency_data.Currency + currency_data.Denomination # combination of currency and denomination
print(f"Total number of denominations is {int(len(combined_series.unique()) / 2)}")
# Inspect data structure
currency_data.head(10)
Total number of images is 24826
Total number of currencies is 17
Total number of denominations is 112
| v_0 | v_1 | v_2 | v_3 | v_4 | v_5 | v_6 | v_7 | v_8 | v_9 | ... | v_248 | v_249 | v_250 | v_251 | v_252 | v_253 | v_254 | v_255 | Currency | Denomination | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0.423395 | 0.327657 | 2.568988 | 3.166228 | 4.801421 | 5.531792 | 2.458083 | 1.218453 | 0.000000 | 1.116785 | ... | 0.000000 | 2.273451 | 5.790633 | 0.000000 | 0.000000 | 0.000000 | 5.635400 | 0.000000 | AUD | 100_1 |
| 1 | 1.158823 | 1.669602 | 3.638447 | 2.823524 | 4.839890 | 2.777757 | 0.753350 | 0.764005 | 0.347871 | 1.928572 | ... | 0.000000 | 2.329623 | 3.516146 | 0.000000 | 0.000000 | 0.000000 | 2.548191 | 1.053410 | AUD | 100_1 |
| 2 | 0.000000 | 0.958235 | 4.706119 | 1.688242 | 3.312702 | 4.516483 | 0.000000 | 1.876461 | 2.250795 | 1.883192 | ... | 0.811282 | 5.591417 | 1.879267 | 0.641139 | 0.571079 | 0.000000 | 1.861483 | 2.172145 | AUD | 100_1 |
| 3 | 0.920511 | 1.820294 | 3.939334 | 3.206829 | 6.253655 | 0.942557 | 2.952453 | 0.000000 | 2.064298 | 1.367196 | ... | 1.764936 | 3.415151 | 2.518404 | 0.582229 | 1.105192 | 0.000000 | 1.566918 | 0.533945 | AUD | 100_1 |
| 4 | 0.331918 | 0.000000 | 3.330771 | 3.023437 | 4.369099 | 5.177336 | 1.499362 | 0.590646 | 0.553625 | 1.405708 | ... | 0.000000 | 4.615945 | 4.825463 | 0.302261 | 0.378229 | 0.000000 | 2.710654 | 0.325945 | AUD | 100_1 |
| 5 | 0.000000 | 0.579322 | 3.951283 | 2.789169 | 4.397989 | 5.207006 | 1.094531 | 0.967335 | 1.249324 | 1.639024 | ... | 0.159039 | 4.868283 | 4.599572 | 0.941216 | 0.704969 | 0.000000 | 2.232955 | 1.204827 | AUD | 100_1 |
| 6 | 0.000000 | 0.267766 | 3.068679 | 1.999993 | 4.180965 | 4.987628 | 1.141044 | 0.675234 | 0.454930 | 1.218580 | ... | 0.000000 | 4.023349 | 3.999944 | 0.754004 | 0.383292 | 0.000000 | 2.701403 | 1.770314 | AUD | 100_1 |
| 7 | 0.000000 | 0.884940 | 4.487358 | 3.647301 | 2.086322 | 4.824439 | 0.184787 | 1.605425 | 1.769273 | 1.486702 | ... | 0.721975 | 6.629235 | 3.773829 | 0.000000 | 1.486893 | 0.511956 | 2.496674 | 0.337508 | AUD | 100_1 |
| 8 | 0.000000 | 0.638652 | 6.130018 | 3.259331 | 2.874515 | 4.259819 | 0.948092 | 0.948465 | 1.303086 | 1.145841 | ... | 1.677202 | 5.358987 | 3.290113 | 0.000000 | 0.354480 | 0.000000 | 2.524134 | 0.792196 | AUD | 100_1 |
| 9 | 0.000000 | 0.744923 | 3.864101 | 0.600698 | 2.609252 | 4.821238 | 1.397394 | 0.000000 | 1.283926 | 1.121008 | ... | 0.531523 | 3.229470 | 3.241802 | 0.000000 | 1.377622 | 0.000000 | 1.475371 | 0.828369 | AUD | 100_1 |
10 rows × 258 columns
Furthermore, the Denomination column also encodes whether the image shows the note's front or back face: e.g. 100_1 corresponds to a front-face image, whereas 100_2 corresponds to a rear-face image.
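The `<value>_<face>` convention can be parsed programmatically; a small sketch on a hypothetical mini-frame (not the real dataset):

```python
import pandas as pd

# Hypothetical frame mimicking the "<value>_<face>" label format described above
notes = pd.DataFrame({"Denomination": ["100_1", "100_2", "5_2"]})

# Split each label into a numeric face value and a front/back indicator
parts = notes["Denomination"].str.split("_", expand=True)
notes["Value"] = parts[0].astype(int)
notes["Face"] = parts[1].map({"1": "front", "2": "back"})

print(notes[["Value", "Face"]])
```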
Exploratory Data Analysis¶
currency_data.Currency.value_counts()
| count | |
|---|---|
| Currency | |
| TRY | 2888 |
| BRL | 2078 |
| INR | 1921 |
| EUR | 1905 |
| JPY | 1658 |
| AUD | 1616 |
| USD | 1604 |
| MYR | 1202 |
| IDR | 1164 |
| PHP | 1164 |
| CAD | 1162 |
| NZD | 1156 |
| PKR | 1131 |
| MXN | 1122 |
| GBP | 1108 |
| SGD | 1015 |
| NNR | 932 |
# Extract image feature embeddings
features = currency_data.drop(['Currency', 'Denomination'], axis=1)
# Save currency labels for later use in plotting
labels = currency_data['Currency']
T-SNE¶
start = time.time()
# t-SNE transformation of the 256-D embeddings down to 2-D
tsne = TSNE(n_components=2, perplexity=25, n_iter=1000, verbose=1)
tsne_results = tsne.fit_transform(features)
end = time.time()
print("Time taken to execute", end - start)
[t-SNE] Computing 76 nearest neighbors...
[t-SNE] Indexed 24826 samples in 0.140s...
[t-SNE] Computed neighbors for 24826 samples in 33.671s...
[t-SNE] Computed conditional probabilities for sample 1000 / 24826
[t-SNE] Computed conditional probabilities for sample 2000 / 24826
[t-SNE] Computed conditional probabilities for sample 3000 / 24826
[t-SNE] Computed conditional probabilities for sample 4000 / 24826
[t-SNE] Computed conditional probabilities for sample 5000 / 24826
[t-SNE] Computed conditional probabilities for sample 6000 / 24826
[t-SNE] Computed conditional probabilities for sample 7000 / 24826
[t-SNE] Computed conditional probabilities for sample 8000 / 24826
[t-SNE] Computed conditional probabilities for sample 9000 / 24826
[t-SNE] Computed conditional probabilities for sample 10000 / 24826
[t-SNE] Computed conditional probabilities for sample 11000 / 24826
[t-SNE] Computed conditional probabilities for sample 12000 / 24826
[t-SNE] Computed conditional probabilities for sample 13000 / 24826
[t-SNE] Computed conditional probabilities for sample 14000 / 24826
[t-SNE] Computed conditional probabilities for sample 15000 / 24826
[t-SNE] Computed conditional probabilities for sample 16000 / 24826
[t-SNE] Computed conditional probabilities for sample 17000 / 24826
[t-SNE] Computed conditional probabilities for sample 18000 / 24826
[t-SNE] Computed conditional probabilities for sample 19000 / 24826
[t-SNE] Computed conditional probabilities for sample 20000 / 24826
[t-SNE] Computed conditional probabilities for sample 21000 / 24826
[t-SNE] Computed conditional probabilities for sample 22000 / 24826
[t-SNE] Computed conditional probabilities for sample 23000 / 24826
[t-SNE] Computed conditional probabilities for sample 24000 / 24826
[t-SNE] Computed conditional probabilities for sample 24826 / 24826
[t-SNE] Mean sigma: 4.468774
[t-SNE] KL divergence after 250 iterations with early exaggeration: 82.053993
[t-SNE] KL divergence after 1000 iterations: 1.326969
Time taken to execute 252.5745358467102
# Create a DataFrame containing the 2D coordinates of t-SNE embeddings
tsne_df = pd.DataFrame({
'tsne_1': tsne_results[:, 0],
'tsne_2': tsne_results[:, 1],
'Currency': labels
})
# Plotting
plt.figure(figsize=(16, 10))
sns.scatterplot(
x='tsne_1', y='tsne_2',
hue='Currency',
palette=sns.color_palette("hsv", len(tsne_df['Currency'].unique())),
data=tsne_df,
legend="full",
alpha=0.6
)
plt.title('t-SNE visualization of Currency Embeddings')
plt.show()
- Cluster Structure: The t-SNE plot shows distinct clusters for the different currencies in the dataset, with each cluster colour corresponding to a specific currency.
- Clear Segregation of Currencies: The embeddings separate currencies well, and the internal structure of each cluster reflects the variation within a currency across denominations.
- Representation of Denominations: Each visible sub-grouping corresponds to a denomination of a currency, either the front or the back of the banknote, showing that the embeddings capture and separate features specific to each currency type.
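Exact t-SNE on all 24,826 256-dimensional embeddings took over four minutes above. A common speed-up, sketched here on random stand-in data rather than the real embeddings, is to reduce dimensionality with PCA before running t-SNE:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(23200555)
X = rng.normal(size=(200, 256))  # stand-in for the embedding matrix

# Project to 50 principal components first, then embed in 2-D
X_reduced = PCA(n_components=50, random_state=23200555).fit_transform(X)
tsne = TSNE(n_components=2, perplexity=25, random_state=23200555)
X_2d = tsne.fit_transform(X_reduced)
print(X_2d.shape)  # (200, 2)
```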
Data Preprocessing¶
The dataset was preprocessed by Microsoft, so the embeddings are uniform and ready for direct use in analytical models without further adjustment.
For this project, we concentrate solely on embeddings of Euro banknotes, subsetting the dataset so that the model is trained on denomination and face classification for this currency.
# Keep only the Euro notes
eur_mask = currency_data.Currency == "EUR"
currency_data = currency_data[eur_mask]
currency_data
| v_0 | v_1 | v_2 | v_3 | v_4 | v_5 | v_6 | v_7 | v_8 | v_9 | ... | v_248 | v_249 | v_250 | v_251 | v_252 | v_253 | v_254 | v_255 | Currency | Denomination | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 4856 | 0.733504 | 0.638422 | 1.598714 | 1.681754 | 6.540605 | 0.000000 | 1.459411 | 0.000000 | 2.079704 | 4.959981 | ... | 3.915681 | 0.647658 | 0.0 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.393929 | EUR | 100_1 |
| 4857 | 0.913031 | 2.295318 | 3.801070 | 1.986009 | 7.242356 | 0.104051 | 1.216013 | 0.000000 | 4.594367 | 3.125084 | ... | 3.334404 | 3.552105 | 0.0 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.487242 | EUR | 100_1 |
| 4858 | 0.998445 | 3.255729 | 6.250493 | 1.114518 | 5.722714 | 0.000000 | 0.000000 | 0.388997 | 4.068294 | 2.752516 | ... | 1.214918 | 0.709190 | 0.0 | 2.121659 | 0.000000 | 0.000000 | 0.000000 | 1.330791 | EUR | 100_1 |
| 4859 | 0.342266 | 2.040487 | 3.134018 | 0.882430 | 5.532273 | 2.364588 | 0.888798 | 1.168506 | 2.759583 | 4.384267 | ... | 1.271958 | 1.022121 | 0.0 | 0.678421 | 0.000000 | 0.000000 | 0.000000 | 1.388966 | EUR | 100_1 |
| 4860 | 0.208291 | 0.635860 | 0.804426 | 0.791553 | 5.789040 | 1.370758 | 2.433199 | 0.000000 | 0.503870 | 5.413804 | ... | 3.902302 | 2.037109 | 0.0 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.651038 | EUR | 100_1 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 6756 | 0.000000 | 0.590744 | 1.445540 | 0.000000 | 1.189507 | 0.000000 | 0.279621 | 0.000000 | 0.240242 | 0.513929 | ... | 2.422807 | 2.923299 | 0.0 | 1.734571 | 0.000000 | 0.751411 | 0.786313 | 0.000000 | EUR | 5_2 |
| 6757 | 0.000000 | 1.369577 | 0.869589 | 0.000000 | 1.566690 | 0.000000 | 0.000000 | 0.000000 | 0.532411 | 1.853675 | ... | 0.000000 | 2.350859 | 0.0 | 1.979731 | 0.000000 | 0.307194 | 0.188176 | 0.072723 | EUR | 5_2 |
| 6758 | 0.000000 | 1.885938 | 0.000000 | 0.527718 | 1.326268 | 0.026695 | 0.737375 | 0.000000 | 0.859965 | 0.987305 | ... | 1.401858 | 1.605387 | 0.0 | 0.347403 | 0.000000 | 1.019751 | 0.280304 | 1.130789 | EUR | 5_2 |
| 6759 | 0.000000 | 1.034726 | 0.000000 | 0.000000 | 0.000000 | 0.532541 | 1.553406 | 0.000000 | 0.000000 | 0.899382 | ... | 1.328766 | 1.758764 | 0.0 | 1.333710 | 0.535388 | 0.845100 | 0.000000 | 0.218250 | EUR | 5_2 |
| 6760 | 0.415861 | 0.678737 | 0.000000 | 0.000000 | 0.905419 | 0.000000 | 0.234939 | 0.000000 | 0.105213 | 1.388560 | ... | 1.129161 | 1.910114 | 0.0 | 0.940643 | 0.000000 | 0.546086 | 1.110229 | 0.000000 | EUR | 5_2 |
1905 rows × 258 columns
Let's summarize the distribution of unique denomination-face frequencies from the filtered dataset. This analysis will assist us in assessing class balance, which is crucial for ensuring robust model training and performance.
# Count the occurrences of each denomination
denomination_counts = currency_data['Denomination'].value_counts().reset_index()
denomination_counts.columns = ['Denomination', 'Count']
# Create a bar plot
plt.figure(figsize=(12, 8))
sns.barplot(data=denomination_counts, x='Denomination', y='Count', hue='Denomination', palette='coolwarm', legend=False)
plt.title('Count of Euro Denominations')
plt.xlabel('Denomination')
plt.ylabel('Count')
plt.xticks(rotation=45) # Rotate labels to make them readable
plt.show()
- Euro Denomination Overview: The bar chart shows the frequency of each Euro denomination-face class in the filtered dataset.
- Balanced Data Profile: Denominations below 100 Euro have a relatively even distribution, which promotes balanced training.
- Emphasis on Higher Values: The 100 and 200 Euro notes are represented more frequently, which may improve the model's accuracy on these classes.
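Class balance can also be quantified rather than judged from the chart alone; a sketch using toy labels (not the EUR subset):

```python
import pandas as pd

# Toy label series standing in for currency_data['Denomination']
labels = pd.Series(["100_1"] * 120 + ["100_2"] * 115 + ["5_1"] * 80 + ["5_2"] * 85)

# Per-class proportions and the largest/smallest class ratio
proportions = labels.value_counts(normalize=True)
imbalance_ratio = labels.value_counts().max() / labels.value_counts().min()
print(proportions.round(3))
print(f"Imbalance ratio (largest/smallest class): {imbalance_ratio:.2f}")
```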
# Shuffle the dataset reproducibly so rows are not ordered by class
currency_data = currency_data.sample(frac=1, random_state=23200555).reset_index(drop=True)
# Specify the target column
target_column = 'Denomination'
features = currency_data.drop(currency_data.columns[-2:], axis=1)
labels = currency_data[target_column].astype("category")
# Get input shape and number of classes
input_shape = features.shape[1]
num_classes = labels.unique().shape[0]
# Split the data into training and testing sets with stratification
X_train, X_test, y_train, y_test = train_test_split(
features, labels, test_size=0.2, random_state=23200555, stratify=labels
)
# One hot encoding y_train and y_test
y_train = pd.get_dummies(y_train)
y_test = pd.get_dummies(y_test)
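One caveat with pd.get_dummies applied separately to the two splits: if a class were absent from one split, the dummy frames would have different columns. The stratified split avoids that here, but a defensive sketch (toy labels) aligns the test columns to the training ones:

```python
import pandas as pd

# Toy splits; '20_1' happens to be missing from the test labels
y_tr = pd.Series(["5_1", "10_1", "20_1", "5_1"])
y_te = pd.Series(["5_1", "10_1"])

Y_tr = pd.get_dummies(y_tr)
# Reindex the test dummies to the training columns, filling missing classes with 0
Y_te = pd.get_dummies(y_te).reindex(columns=Y_tr.columns, fill_value=0)
print(list(Y_te.columns))
```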
Model Definition¶
def create_model(units=128, dropout_rate=0.1):
model = Sequential([
Input(shape=(input_shape,)),
Dense(units, activation='relu'),
Dropout(dropout_rate),
Dense(num_classes, activation='softmax') # Softmax for multi-class classification
])
model.compile(
optimizer=Adam(learning_rate=1e-3),
loss='categorical_crossentropy', # Use categorical cross-entropy for one-hot encoded labels
metrics=['accuracy', Precision(name='precision'), Recall(name='recall')]
)
return model
The denomination neural network classifier is defined with a straightforward architecture suited for multi-class classification:
- Input Layer: Accepts input with a shape corresponding to the number of features.
- Dense Layer: A fully connected layer with ReLU activation. The number of neurons is configurable.
- Dropout Layer: Helps in preventing overfitting by randomly setting a fraction of input units to 0 at each update during training. The rate is adjustable.
- Output Layer: A dense layer with softmax activation to output probabilities for each class.
Compilation:
- The model is compiled using the Adam optimizer with a learning rate of 0.001.
- The loss function is categorical crossentropy, which is appropriate for multi-class classification where labels are one-hot encoded.
- Accuracy, precision, and recall are included as metrics to monitor the model's performance during training.
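For intuition, with one-hot labels the categorical cross-entropy reduces to the negative log-probability the model assigns to the true class, averaged over samples. A minimal NumPy sketch (toy predictions, not model output):

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred, eps=1e-7):
    # y_true: one-hot (n, k); y_pred: softmax probabilities (n, k)
    y_pred = np.clip(y_pred, eps, 1.0)
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=1))

y_true = np.array([[1.0, 0.0], [0.0, 1.0]])
y_pred = np.array([[0.9, 0.1], [0.2, 0.8]])
loss = categorical_crossentropy(y_true, y_pred)
print(round(loss, 4))  # -(ln 0.9 + ln 0.8) / 2 ≈ 0.1643
```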
Fine-Tuning Model Parameters¶
Understanding Hyperparameters¶
Before diving into the optimization process, let's break down the hyperparameters we are focusing on:
- Neurons in Hidden Layers: Determines the capacity of the model to learn complex patterns. More neurons can capture detailed features but may also lead to overfitting.
- Dropout Rate: Helps prevent the model from becoming too reliant on any single neuron by randomly dropping units during training. This enhances the model's ability to generalize to new data.
We will systematically tune these parameters to find the best configuration that balances learning depth with the ability to generalize, crucial for the reliable recognition of Euro denominations.
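The tuning loop below enumerates every (units, dropout) pair; the size of that grid can be sketched up front, using the same ranges as the search code:

```python
import numpy as np
from itertools import product

units_options = np.arange(64, 513, 64)      # 64, 128, ..., 512
dropout_options = np.arange(0.1, 0.6, 0.1)  # 0.1, 0.2, 0.3, 0.4, 0.5

# Full Cartesian product of the two hyperparameter ranges
grid = list(product(units_options, dropout_options))
print(f"{len(grid)} configurations to evaluate")  # 8 unit counts x 5 dropout rates = 40
```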
def calculate_f1_score(precision, recall):
if (precision + recall) == 0:
return 0 # Avoid division by zero
return 2 * (precision * recall) / (precision + recall)
# Define batch size
batch_size = 128
# Define the grid of hyperparameters to search
units_options = np.arange(64, 513, 64)
dropout_options = np.arange(0.1, 0.6, 0.1)
# Initialize variables to store the best configuration
best_f1_score = 0
best_units = None
best_dropout = None
best_model = None
results = []
start_time = time.time()
# Stratified K-Fold cross-validation
skf = StratifiedKFold(n_splits=4, shuffle=True, random_state=23200555)
for units in units_options:
for dropout in dropout_options:
f1_scores = []
val_accuracies = []
val_losses = []
print(f"Training model with {units} units and {dropout:.2f} dropout rate")
for train_index, val_index in skf.split(X_train, y_train.idxmax(axis=1)):
X_train_fold, X_val_fold = X_train.iloc[train_index], X_train.iloc[val_index]
y_train_fold, y_val_fold = y_train.iloc[train_index], y_train.iloc[val_index]
# initialize the model with hyperparameters
model = create_model(units=units, dropout_rate=dropout)
early_stop_acc = EarlyStopping(monitor="val_accuracy", patience=5, restore_best_weights=True)
early_stop_loss = EarlyStopping(monitor="val_loss", patience=3, restore_best_weights=True)
history = model.fit(
X_train_fold, y_train_fold,
validation_data=(X_val_fold, y_val_fold),
epochs=30, batch_size=batch_size, callbacks=[early_stop_acc, early_stop_loss],verbose=0
)
val_accuracy = max(history.history['val_accuracy'])
val_loss = min(history.history['val_loss'])
# F1 score computed from the final epoch's validation precision and recall
precision = history.history['val_precision'][-1]
recall = history.history['val_recall'][-1]
f1 = calculate_f1_score(precision, recall)
val_accuracies.append(val_accuracy)
val_losses.append(val_loss)
f1_scores.append(f1)
# Average validation accuracy and loss over the folds
avg_val_accuracy = np.mean(val_accuracies)
avg_val_loss = np.mean(val_losses)
# Average F1 score across folds
avg_f1_score = np.mean(f1_scores)
results.append((units, dropout, avg_val_accuracy, avg_val_loss, avg_f1_score))
print(f"Validation F1 Score: {avg_f1_score:.4f}")
print(f"Validation Accuracy: {avg_val_accuracy:.4f}")
print(f"Validation Loss: {avg_val_loss:.4f}")
print("-" * 50)
if avg_f1_score > best_f1_score:
best_f1_score = avg_f1_score
best_units = units
best_dropout = dropout
best_model = model
duration = time.time() - start_time
print(f"Grid search completed in {duration:.2f} seconds")
print(f"Best model found with {best_units} units and {best_dropout:.2f} dropout rate, F1 Score: {best_f1_score}")
| Units | Dropout | Val F1 Score | Val Accuracy | Val Loss |
|---|---|---|---|---|
| 64 | 0.10 | 0.9718 | 0.9757 | 0.0939 |
| 64 | 0.20 | 0.9737 | 0.9757 | 0.0948 |
| 64 | 0.30 | 0.9731 | 0.9744 | 0.0917 |
| 64 | 0.40 | 0.9676 | 0.9665 | 0.1156 |
| 64 | 0.50 | 0.9693 | 0.9711 | 0.1041 |
| 128 | 0.10 | 0.9738 | 0.9744 | 0.0906 |
| 128 | 0.20 | 0.9735 | 0.9757 | 0.0817 |
| 128 | 0.30 | 0.9755 | 0.9803 | 0.0746 |
| 128 | 0.40 | 0.9731 | 0.9744 | 0.0905 |
| 128 | 0.50 | 0.9717 | 0.9711 | 0.0934 |
| 192 | 0.10 | 0.9742 | 0.9751 | 0.0817 |
| 192 | 0.20 | 0.9732 | 0.9777 | 0.0744 |
| 192 | 0.30 | 0.9795 | 0.9823 | 0.0772 |
| 192 | 0.40 | 0.9765 | 0.9764 | 0.0776 |
| 192 | 0.50 | 0.9748 | 0.9764 | 0.0846 |
| 256 | 0.10 | 0.9785 | 0.9823 | 0.0715 |
| 256 | 0.20 | 0.9768 | 0.9810 | 0.0764 |
| 256 | 0.30 | 0.9742 | 0.9790 | 0.0731 |
| 256 | 0.40 | 0.9748 | 0.9764 | 0.0760 |
| 256 | 0.50 | 0.9751 | 0.9777 | 0.0769 |
| 320 | 0.10 | 0.9759 | 0.9797 | 0.0751 |
| 320 | 0.20 | 0.9759 | 0.9803 | 0.0714 |
| 320 | 0.30 | 0.9748 | 0.9764 | 0.0750 |
| 320 | 0.40 | 0.9745 | 0.9764 | 0.0766 |
| 320 | 0.50 | 0.9802 | 0.9843 | 0.0716 |
| 384 | 0.10 | 0.9772 | 0.9797 | 0.0754 |
| 384 | 0.20 | 0.9732 | 0.9783 | 0.0755 |
| 384 | 0.30 | 0.9765 | 0.9816 | 0.0704 |
| 384 | 0.40 | 0.9742 | 0.9757 | 0.0745 |
| 384 | 0.50 | 0.9802 | 0.9856 | 0.0617 |
| 448 | 0.10 | 0.9746 | 0.9797 | 0.0732 |
| 448 | 0.20 | 0.9779 | 0.9823 | 0.0683 |
| 448 | 0.30 | 0.9769 | 0.9803 | 0.0710 |
| 448 | 0.40 | 0.9769 | 0.9803 | 0.0721 |
| 448 | 0.50 | 0.9762 | 0.9816 | 0.0704 |
| 512 | 0.10 | 0.9772 | 0.9803 | 0.0681 |
| 512 | 0.20 | 0.9786 | 0.9810 | 0.0676 |
| 512 | 0.30 | 0.9772 | 0.9783 | 0.0724 |
| 512 | 0.40 | 0.9802 | 0.9829 | 0.0674 |
| 512 | 0.50 | 0.9798 | 0.9823 | 0.0667 |

Grid search completed in 525.06 seconds
Best model found with 384 units and 0.50 dropout rate, F1 Score: 0.9801856722129967
In the above cell, we employed a methodical approach to determine the optimal model configuration using grid search and Stratified K-Fold cross-validation. Here’s a breakdown of our process:
Hyperparameter Range:
- Hidden Layer Neurons: Configurations ranged from 64 to 512 neurons.
- Dropout Rate: Experimented with rates from 0.1 to 0.5 to mitigate overfitting.
Validation Strategy:
- Stratified K-Fold cross-validation with four splits was used to enhance the robustness and generalizability of our model.
Metric for Model Selection:
- The F1-score was pivotal in selecting the best model, balancing precision and recall effectively, which is crucial for handling potentially imbalanced class distributions.
After completion of the grid search, the best model configuration was found to be 384 units in the hidden layer with a 0.5 dropout rate, achieving an F1 score of 0.9802.
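The F1 selection metric is the harmonic mean of precision and recall; restating the helper so the snippet is self-contained, a hand-computed sanity check:

```python
def calculate_f1_score(precision, recall):
    if (precision + recall) == 0:
        return 0  # avoid division by zero
    return 2 * (precision * recall) / (precision + recall)

# 2 * (0.8 * 0.4) / (0.8 + 0.4) = 0.64 / 1.2 = 0.5333...
print(round(calculate_f1_score(0.8, 0.4), 4))  # 0.5333
assert calculate_f1_score(0.0, 0.0) == 0  # degenerate case is handled
```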
results_df = pd.DataFrame(results, columns=['units', 'dropout', 'val_accuracy', 'val_loss','val_f1_score'])
# Reshape data for contour plotting
units_unique = np.unique(results_df['units'])
dropout_unique = np.unique(results_df['dropout'])
units_grid, dropout_grid = np.meshgrid(units_unique, dropout_unique)
val_f1_score_grid = results_df.pivot(index='dropout', columns='units', values='val_f1_score').values
val_accuracy_grid = results_df.pivot(index='dropout', columns='units', values='val_accuracy').values
val_loss_grid = results_df.pivot(index='dropout', columns='units', values='val_loss').values
plt.figure(figsize=(16,10))
# Plot Contour for Validation F1-Score/Accuracy/Loss with Path
# Validation F1-Score Plot
ax1 = plt.subplot2grid((2, 2), (0, 0), rowspan=2)
contour_f1 = ax1.contourf(units_grid, dropout_grid, val_f1_score_grid, cmap='viridis', alpha=0.7)
plt.colorbar(contour_f1, ax=ax1, label='Validation F1 Score')
ax1.plot(results_df['units'], results_df['dropout'], 'ko-', label='Path Taken')
ax1.scatter([best_units], [best_dropout], color='red', marker='x', s=100, label='Best Configuration')
ax1.set_title('Validation F1-Score Contour with Path')
ax1.set_xlabel('Units')
ax1.set_ylabel('Dropout Rate')
ax1.set_xlim(40, 540)
ax1.set_ylim(0.05, 0.55)
ax1.legend(loc='upper left', fontsize='small', frameon=False)
# Validation Accuracy Plot
ax2 = plt.subplot2grid((2, 2), (0, 1))
contour_acc = ax2.contourf(units_grid, dropout_grid, val_accuracy_grid, cmap='PuBu', alpha=0.7)
plt.colorbar(contour_acc, ax=ax2, label='Validation Accuracy')
ax2.plot(results_df['units'], results_df['dropout'], 'ko-', label='Path Taken')
ax2.scatter([best_units], [best_dropout], color='red', marker='x', s=100, label='Best Configuration')
ax2.set_title('Validation Accuracy Contour with Path')
ax2.set_xlabel('Units')
ax2.set_ylabel('Dropout Rate')
ax2.set_xlim(40, 540)
ax2.set_ylim(0.05, 0.55)
# Validation Loss Plot
ax3 = plt.subplot2grid((2, 2), (1, 1))
contour_loss = ax3.contourf(units_grid, dropout_grid, val_loss_grid, cmap='plasma', alpha=0.7)
plt.colorbar(contour_loss, ax=ax3, label='Validation Loss')
ax3.plot(results_df['units'], results_df['dropout'], 'ko-', label='Path Taken')
ax3.scatter([best_units], [best_dropout], color='red', marker='x', s=100, label='Best Configuration')
ax3.set_title('Validation Loss Contour with Path')
ax3.set_xlabel('Units')
ax3.set_ylabel('Dropout Rate')
ax3.set_xlim(40, 540)
ax3.set_ylim(0.05, 0.55)
plt.tight_layout()
plt.show()
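The contour inputs above come from pivoting the flat results list into a dropout-by-units grid; a toy sketch of that reshaping with hypothetical numbers:

```python
import pandas as pd

# Hypothetical (units, dropout, metric) tuples standing in for the grid-search results
results = [(64, 0.1, 0.97), (64, 0.2, 0.96), (128, 0.1, 0.98), (128, 0.2, 0.95)]
df = pd.DataFrame(results, columns=["units", "dropout", "val_f1_score"])

# Rows indexed by dropout, columns by units -- the 2-D shape contourf expects
grid = df.pivot(index="dropout", columns="units", values="val_f1_score")
print(grid.shape)  # (2, 2)
```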
Final Model Training¶
Now, we will train the final model using the best hyperparameters obtained from our extensive testing. Here’s what we're doing:
- Model Setup: We initialize our model with the optimal number of units and dropout rate to ensure effective learning.
- Monitoring and Saving: We employ early stopping mechanisms based on validation accuracy and loss to prevent overfitting. The best performing model during training is automatically saved.
- Full Training-Set Training: The model is trained on the entire training split, with the held-out test split used for validation, using the chosen parameters to fine-tune its ability to predict accurately on new data.
- Time Efficiency: We monitor the training duration to optimize performance and ensure efficient use of computational resources.
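The early-stopping rule described above (stop after `patience` epochs without improvement and keep the best weights) can be sketched independently of Keras, as a plain-Python illustration of the logic:

```python
def early_stopping_run(val_losses, patience=5):
    """Return the index of the epoch whose weights would be restored,
    stopping after `patience` epochs without a new best val_loss."""
    best_epoch, best_loss, wait = 0, float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_epoch, best_loss, wait = epoch, loss, 0
        else:
            wait += 1
            if wait >= patience:
                break
    return best_epoch

# Loss improves until epoch 3, then stalls for 5 epochs -> restore epoch 3
print(early_stopping_run([0.9, 0.5, 0.3, 0.2, 0.25, 0.24, 0.3, 0.28, 0.26], patience=5))  # 3
```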
# Create the model with the best hyperparameters
final_model = create_model(units=best_units, dropout_rate=best_dropout)
# Set up callbacks for early stopping
checkpoint = ModelCheckpoint(
filepath="trained_models/best_model.h5",
monitor="val_accuracy",
save_best_only=True,
verbose=1
)
early_stop_acc = EarlyStopping(monitor="val_accuracy", patience=5, restore_best_weights=True, verbose=1)
early_stop_loss = EarlyStopping(monitor="val_loss", patience=5, restore_best_weights=True, verbose=1)
# Train the final model on the full training set
start_time = time.time()
history = final_model.fit(
X_train, y_train,
validation_data=(X_test, y_test),
epochs=30, # You can adjust the number of epochs as needed
batch_size=batch_size,
callbacks=[checkpoint,early_stop_loss,early_stop_acc],
verbose=1
)
duration = time.time() - start_time
print(f"Final model training completed in {duration:.2f} seconds")
Epoch 1/30
Epoch 1: val_accuracy improved from -inf to 0.94488, saving model to trained_models/best_model.h5
12/12 [==============================] - 1s 40ms/step - loss: 1.4330 - accuracy: 0.6010 - precision: 0.7682 - recall: 0.4849 - val_loss: 0.2077 - val_accuracy: 0.9449 - val_precision: 0.9914 - val_recall: 0.9081
Epoch 2/30
Epoch 2: val_accuracy improved from 0.94488 to 0.97375, saving model to trained_models/best_model.h5
12/12 [==============================] - 0s 13ms/step - loss: 0.2655 - accuracy: 0.9272 - precision: 0.9609 - recall: 0.8865 - val_loss: 0.1038 - val_accuracy: 0.9738 - val_precision: 0.9813 - val_recall: 0.9633
Epoch 3/30
Epoch 3: val_accuracy improved from 0.97375 to 0.97638, saving model to trained_models/best_model.h5
12/12 [==============================] - 0s 13ms/step - loss: 0.1570 - accuracy: 0.9501 - precision: 0.9720 - recall: 0.9331 - val_loss: 0.0783 - val_accuracy: 0.9764 - val_precision: 0.9866 - val_recall: 0.9685
Epoch 4/30
Epoch 4: val_accuracy did not improve from 0.97638
12/12 [==============================] - 0s 11ms/step - loss: 0.1078 - accuracy: 0.9646 - precision: 0.9797 - recall: 0.9521 - val_loss: 0.0654 - val_accuracy: 0.9764 - val_precision: 0.9867 - val_recall: 0.9711
Epoch 5/30
Epoch 5: val_accuracy improved from 0.97638 to 0.97900, saving model to trained_models/best_model.h5
12/12 [==============================] - 0s 13ms/step - loss: 0.0931 - accuracy: 0.9718 - precision: 0.9825 - recall: 0.9593 - val_loss: 0.0565 - val_accuracy: 0.9790 - val_precision: 0.9946 - val_recall: 0.9711
Epoch 6/30
Epoch 6: val_accuracy improved from 0.97900 to 0.98425, saving model to trained_models/best_model.h5
12/12 [==============================] - 0s 16ms/step - loss: 0.0760 - accuracy: 0.9783 - precision: 0.9873 - recall: 0.9711 - val_loss: 0.0529 - val_accuracy: 0.9843 - val_precision: 0.9920 - val_recall: 0.9738
Epoch 7/30
Epoch 7: val_accuracy did not improve from 0.98425
12/12 [==============================] - 0s 12ms/step - loss: 0.0772 - accuracy: 0.9777 - precision: 0.9834 - recall: 0.9698 - val_loss: 0.0482 - val_accuracy: 0.9843 - val_precision: 0.9920 - val_recall: 0.9790
Epoch 8/30
Epoch 8: val_accuracy improved from 0.98425 to 0.98950, saving model to trained_models/best_model.h5
12/12 [==============================] - 0s 13ms/step - loss: 0.0570 - accuracy: 0.9849 - precision: 0.9920 - recall: 0.9764 - val_loss: 0.0481 - val_accuracy: 0.9895 - val_precision: 0.9920 - val_recall: 0.9816
Epoch 9/30
Epoch 9: val_accuracy did not improve from 0.98950
12/12 [==============================] - 0s 15ms/step - loss: 0.0568 - accuracy: 0.9843 - precision: 0.9920 - recall: 0.9790 - val_loss: 0.0416 - val_accuracy: 0.9869 - val_precision: 0.9920 - val_recall: 0.9816
Epoch 10/30
Epoch 10: val_accuracy did not improve from 0.98950
12/12 [==============================] - 0s 11ms/step - loss: 0.0470 - accuracy: 0.9856 - precision: 0.9914 - recall: 0.9810 - val_loss: 0.0490 - val_accuracy: 0.9869 - val_precision: 0.9894 - val_recall: 0.9816
Epoch 11/30
Epoch 11: val_accuracy did not improve from 0.98950
12/12 [==============================] - 0s 12ms/step - loss: 0.0424 - accuracy: 0.9915 - precision: 0.9947 - recall: 0.9869 - val_loss: 0.0425 - val_accuracy: 0.9895 - val_precision: 0.9921 - val_recall: 0.9869
Epoch 12/30
Epoch 12: val_accuracy did not improve from 0.98950
12/12 [==============================] - 0s 12ms/step - loss: 0.0395 - accuracy: 0.9895 - precision: 0.9960 - recall: 0.9869 - val_loss: 0.0452 - val_accuracy: 0.9895 - val_precision: 0.9921 - val_recall: 0.9895
Epoch 13/30
Epoch 13: val_accuracy improved from 0.98950 to 0.99213, saving model to trained_models/best_model.h5
12/12 [==============================] - 0s 15ms/step - loss: 0.0390 - accuracy: 0.9895 - precision: 0.9921 - recall: 0.9849 - val_loss: 0.0352 - val_accuracy: 0.9921 - val_precision: 0.9921 - val_recall: 0.9895
Epoch 14/30
Epoch 14: val_accuracy improved from 0.99213 to 0.99475, saving model to trained_models/best_model.h5
12/12 [==============================] - 0s 12ms/step - loss: 0.0324 - accuracy: 0.9934 - precision: 0.9954 - recall: 0.9908 - val_loss: 0.0385 - val_accuracy: 0.9948 - val_precision: 0.9947 - val_recall: 0.9921
Epoch 15/30
Epoch 15: val_accuracy did not improve from 0.99475
12/12 [==============================] - 0s 10ms/step - loss: 0.0307 - accuracy: 0.9908 - precision: 0.9941 - recall: 0.9888 - val_loss: 0.0344 - val_accuracy: 0.9921 - val_precision: 0.9921 - val_recall: 0.9921
Epoch 16/30
Epoch 16: val_accuracy did not improve from 0.99475
12/12 [==============================] - 0s 11ms/step - loss: 0.0237 - accuracy: 0.9948 - precision: 0.9974 - recall: 0.9928 - val_loss: 0.0334 - val_accuracy: 0.9948 - val_precision: 0.9947 - val_recall: 0.9921
Epoch 17/30
Epoch 17: val_accuracy did not improve from 0.99475
12/12 [==============================] - 0s 11ms/step - loss: 0.0211 - accuracy: 0.9967 - precision: 0.9987 - recall: 0.9928 - val_loss: 0.0345 - val_accuracy: 0.9921 - val_precision: 0.9921 - val_recall: 0.9921
Epoch 18/30
Epoch 18: val_accuracy did not improve from 0.99475
12/12 [==============================] - 0s 13ms/step - loss: 0.0218 - accuracy: 0.9954 - precision: 0.9961 - recall: 0.9934 - val_loss: 0.0401 - val_accuracy: 0.9921 - val_precision: 0.9921 - val_recall: 0.9921
Epoch 19/30
Epoch 19: val_accuracy did not improve from 0.99475
Restoring model weights from the end of the best epoch: 14.
12/12 [==============================] - 0s 13ms/step - loss: 0.0213 - accuracy: 0.9980 - precision: 0.9980 - recall: 0.9941 - val_loss: 0.0336 - val_accuracy: 0.9921 - val_precision: 0.9947 - val_recall: 0.9921
Epoch 19: early stopping
Final model training completed in 4.15 seconds
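The `checkpoint`, `early_stop_loss`, and `early_stop_acc` callbacks passed to `fit` above were defined earlier in the notebook. A minimal sketch of how such callbacks can be configured is shown below; the patience values are illustrative assumptions, not necessarily those used here, though the checkpoint path and monitored metrics match the log output:

```python
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping

# Save the model whenever validation accuracy improves
checkpoint = ModelCheckpoint(
    "trained_models/best_model.h5",
    monitor="val_accuracy",
    save_best_only=True,
    verbose=1,
)

# Stop when validation loss stops decreasing (patience is illustrative)
early_stop_loss = EarlyStopping(
    monitor="val_loss",
    patience=5,
    restore_best_weights=True,
    verbose=1,
)

# Stop when validation accuracy stops improving
early_stop_acc = EarlyStopping(
    monitor="val_accuracy",
    patience=5,
    restore_best_weights=True,
    verbose=1,
)
```

Because `restore_best_weights=True`, the weights from the best epoch (epoch 14 in the log above) are restored when training halts early.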
Now, let's visually assess the performance of our final model across epochs.
# Use the seaborn 'darkgrid' style with a 'talk' context and muted palette
sns.set(style="darkgrid", context='talk', palette='muted')
# Create a figure with a specified size
plt.figure(figsize=(12, 6))
# Convert the history data to a DataFrame for easier plotting
history_df = pd.DataFrame(history.history)
history_df['Epoch'] = history_df.index + 1
# Plot Training and Validation Accuracy
plt.subplot(1, 2, 1)
sns.lineplot(data=history_df, x='Epoch', y='accuracy', label='Training Accuracy', linewidth=2.5)
sns.lineplot(data=history_df, x='Epoch', y='val_accuracy', label='Validation Accuracy', linewidth=2.5)
plt.title('Training and Validation Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
# Plot Training and Validation Loss
plt.subplot(1, 2, 2)
sns.lineplot(data=history_df, x='Epoch', y='loss', label='Training Loss', linewidth=2.5)
sns.lineplot(data=history_df, x='Epoch', y='val_loss', label='Validation Loss', linewidth=2.5)
plt.title('Training and Validation Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
# Remove top and right borders
sns.despine()
plt.tight_layout()
plt.show()
Accuracy Trends: The accuracy plot indicates a swift increase in both training and validation accuracy during the initial epochs, demonstrating that the model quickly learns to classify the denominations effectively. The convergence of training and validation accuracy suggests a good generalization to unseen data.
Loss Trends: The loss plot shows a steep decrease in both training and validation loss, stabilizing after a few epochs. This trend confirms that the model's predictions are becoming increasingly accurate as training progresses, and it effectively minimizes errors over time.
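The convergence described above can also be checked numerically: a small gap between the final training and validation metrics is a quick indicator that the model is not overfitting. A sketch of such a check, using a helper function that only needs the `history.history` dict (the example values below are illustrative, resembling the log above):

```python
def generalization_gap(history_dict):
    """Return the final-epoch gaps between training and validation
    accuracy and loss; small gaps suggest good generalization."""
    acc_gap = history_dict["accuracy"][-1] - history_dict["val_accuracy"][-1]
    loss_gap = history_dict["val_loss"][-1] - history_dict["loss"][-1]
    return acc_gap, loss_gap

# Illustrative values, similar in shape to the training log above
example_history = {
    "accuracy": [0.60, 0.93, 0.998],
    "val_accuracy": [0.94, 0.97, 0.992],
    "loss": [1.43, 0.27, 0.021],
    "val_loss": [0.21, 0.10, 0.034],
}
acc_gap, loss_gap = generalization_gap(example_history)
print(f"accuracy gap: {acc_gap:.3f}, loss gap: {loss_gap:.3f}")
```

In the notebook, the same call would be `generalization_gap(history.history)`.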
Model Evaluation¶
In this section, we evaluate the out-of-sample performance of our trained model. By making predictions on the test set and comparing them to the true labels, we can measure the accuracy of our model across different Euro denominations.
# Make predictions using final_model on test_set
predictions = final_model.predict(X_test)
predicted_labels = np.argmax(predictions, axis=1)
# Assuming y_test is also in one-hot encoded format
true_labels = np.argmax(y_test, axis=1)
# Generate classification report
report = classification_report(true_labels, predicted_labels, target_names=y_test.columns.tolist())
print("Classification Report:")
print(report)
12/12 [==============================] - 0s 2ms/step
Classification Report:
precision recall f1-score support
100_1 1.00 1.00 1.00 58
100_2 1.00 1.00 1.00 76
10_1 1.00 1.00 1.00 20
10_2 1.00 1.00 1.00 20
200_1 1.00 1.00 1.00 41
200_2 1.00 1.00 1.00 39
20_1 1.00 1.00 1.00 20
20_2 1.00 1.00 1.00 20
50_1 0.92 1.00 0.96 24
50_2 1.00 0.91 0.95 23
5_1 1.00 1.00 1.00 20
5_2 1.00 1.00 1.00 20
accuracy 0.99 381
macro avg 0.99 0.99 0.99 381
weighted avg 1.00 0.99 0.99 381
From the classification report shown above, it's evident that our model performs exceptionally well across various Euro denominations:
Precision and Recall: The model achieves a precision and recall of 1.00 for every class except 50_1 and 50_2, indicating a high degree of accuracy in correctly identifying the denominations.
F1-Score: The F1-scores, which balance precision and recall, are near perfect for almost all denominations, highlighting the model's efficiency in accurate classification.
Support: The support values indicate the number of test samples for each denomination class, providing insight into the class distribution of the held-out data on which these scores were computed.
Overall Accuracy: Achieving an overall accuracy of 0.99, the model demonstrates its capability to generalize well to new, unseen data, making it highly reliable for practical applications in currency denomination recognition.
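The macro and weighted averages in the report pool the per-class scores differently: the macro average weights every class equally, while the weighted average weights each class by its support, so larger classes count more. A small self-contained illustration (the per-class F1 scores and supports below are made up for the example, not taken from the report):

```python
import numpy as np

# Hypothetical per-class F1 scores and their supports
f1 = np.array([1.00, 0.96, 0.95])
support = np.array([58, 24, 23])

macro_f1 = f1.mean()                           # every class counts equally
weighted_f1 = np.average(f1, weights=support)  # larger classes count more

print(f"macro F1: {macro_f1:.4f}, weighted F1: {weighted_f1:.4f}")
```

With balanced supports the two averages coincide; here the weighted average is pulled toward the well-classified large class.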
We will next examine the confusion matrix to better understand the specific performance challenges with the 50_1 and 50_2 classes, where precision and recall fall just short of perfect, and to identify possible areas for model improvement.
# Generate the confusion matrix
cm = confusion_matrix(true_labels, predicted_labels)
# Plotting the confusion matrix
plt.figure(figsize=(10, 8))
sns.set(style="whitegrid", palette="muted")
ax = sns.heatmap(cm, annot=True, fmt='d', cmap='Blues', xticklabels=list(y_train.columns), yticklabels=list(y_train.columns))
plt.title('Confusion Matrix')
plt.xlabel('Predicted Labels')
plt.ylabel('True Labels')
plt.show()
According to the confusion matrix, the model exhibits some difficulty distinguishing between the front and back faces of the 50 Euro note, mistaking one face for the other in a few instances. Despite this, it identifies the denomination itself correctly in every case. So while face-orientation recognition could be improved, the model's ability to determine the correct denomination is fully preserved, which is what matters for practical currency recognition by visually impaired users.
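The claim that all errors stay within the same denomination can be verified directly by stripping the face suffix from each label and recomputing accuracy. A sketch, assuming labels follow the `"<value>_<face>"` pattern used above (the label lists here are illustrative, with two front/back confusions in the 50 class):

```python
from sklearn.metrics import accuracy_score

def to_denomination(labels):
    """Strip the face suffix, e.g. '50_1' -> '50'."""
    return [label.split("_")[0] for label in labels]

# Illustrative labels: two face confusions, both within the 50 class
true_labels = ["50_1", "50_2", "50_2", "100_1", "20_2"]
pred_labels = ["50_1", "50_1", "50_2", "100_1", "20_2"]

face_acc = accuracy_score(true_labels, pred_labels)
denom_acc = accuracy_score(to_denomination(true_labels),
                           to_denomination(pred_labels))
print(f"face-level accuracy: {face_acc:.2f}, "
      f"denomination-level accuracy: {denom_acc:.2f}")
```

In the notebook, the same comparison would be run on the string forms of `true_labels` and `predicted_labels`; if the front/back confusions are the only errors, the denomination-level accuracy reaches 1.00 even though the face-level accuracy does not.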