This article aims to demonstrate how a DNN can be used for wall-following robot navigation. The dataset is available in the UCI Machine Learning Repository.
The features are taken from 24 sensor readings, and the labels are Slight-Right-Turn, Move-Forward, Sharp-Right-Turn and Slight-Left-Turn.
We will rely on TensorFlow libraries as much as possible.
First, we download the dataset from the repository.
import tensorflow as tf
URL = "https://archive.ics.uci.edu/ml/machine-learning-databases/00194/sensor_readings_24.data"
dataset = tf.keras.utils.get_file("sensor_readings_24.data",URL)
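As an optional sanity check (assuming pandas is available; this step is not needed for the rest of the pipeline), we can peek at the downloaded file, which contains 24 comma-separated sensor readings followed by the class label:
import pandas as pd
# Inspection only: the .data file has no header row, so we supply column names.
peek = pd.read_csv(
    dataset,
    header=None,
    names=[f"US{i}" for i in range(1, 25)] + ["Class"])
print(peek.shape)                     # (number of examples, 24 sensors + 1 label)
print(peek["Class"].value_counts())   # distribution of the four direction labels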
Then, we define the input function that will be used to create (features, labels) tuples:
CSV_COLUMNS = ['US1','US2','US3','US4','US5','US6','US7','US8','US9','US10','US11','US12','US13','US14','US15','US16','US17','US18','US19','US20','US21','US22','US23','US24', 'Class']
def train_input_fn():
    direction = tf.data.experimental.make_csv_dataset(
        dataset,
        batch_size=32,
        header=False,  # the .data file has no header row, so we pass column_names
        column_names=CSV_COLUMNS,
        label_name="Class")
    direction_batches = (
        direction.cache().repeat().prefetch(tf.data.AUTOTUNE))
    return direction_batches
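To verify that the input function yields (features, labels) tuples, we can pull a single batch and inspect it (purely illustrative):
# Take one batch: features is a dict of 24 tensors, labels is a string tensor.
features, labels = next(iter(train_input_fn()))
print(labels[:5])                 # e.g. b'Move-Forward', b'Sharp-Right-Turn', ...
print(list(features.keys())[:3])  # ['US1', 'US2', 'US3']
print(features['US1'].shape)      # (32,), one reading per example in the batch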
The next step is to create the feature columns:
def create_feature_columns():
    """Creates feature columns from inputs.

    Returns:
      Dictionary of feature columns.
    """
    feature_columns = {
        colname: tf.feature_column.numeric_column(key=colname)
        for colname in CSV_COLUMNS[:-1]  # the 24 sensor columns
    }
    return feature_columns
fc = create_feature_columns()
feature_columns = list(fc.values())
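Feature columns describe how the raw features dict is converted into the dense input of the network. As an illustration (not required for training), we can apply them to one batch with a DenseFeatures layer:
# Illustration only: feature columns map the features dict to a dense (32, 24) matrix.
example_features, _ = next(iter(train_input_fn()))
dense_inputs = tf.keras.layers.DenseFeatures(feature_columns)(example_features)
print(dense_inputs.shape)   # (32, 24)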
Then, we build the model using DNNClassifier:
import tempfile
from tensorflow import keras
model_dir = tempfile.mkdtemp()
model = tf.estimator.DNNClassifier(
    model_dir=model_dir,
    hidden_units=[128, 80, 50, 30, 10],
    feature_columns=feature_columns,
    n_classes=4,
    label_vocabulary=['Move-Forward', 'Slight-Right-Turn', 'Sharp-Right-Turn', 'Slight-Left-Turn'],
    activation_fn=tf.nn.relu,  # DNNClassifier expects a callable, not the string 'relu'
    optimizer=keras.optimizers.Adam(learning_rate=0.005)
)
Now it is time to train the model and evaluate it:
model.train(input_fn=train_input_fn, steps=12000)
result = model.evaluate(input_fn=train_input_fn, steps=10)
for key, value in result.items():
    print(key, ":", value)
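Once trained, the estimator can also be used for prediction. Below is a minimal sketch, using a hypothetical sample of 24 identical readings purely for illustration:
import numpy as np
# Hypothetical single example of 24 sensor readings (illustration only).
sample = {name: np.array([1.5], dtype=np.float32) for name in CSV_COLUMNS[:-1]}
def predict_input_fn():
    return tf.data.Dataset.from_tensor_slices(sample).batch(1)
prediction = next(model.predict(input_fn=predict_input_fn))
print(prediction['classes'][0].decode())   # one of the four direction labels
print(prediction['probabilities'])         # class probabilities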
We can observe an accuracy of about 95% (measured here on batches drawn from the same input pipeline used for training).
The Jupyter Notebook file is available on GitHub.
In a follow-up, we will try another approach using an Elman network.