Human activity recognition is the problem of classifying sequences of accelerometer data recorded by specialized harnesses or smartphones into known, well-defined movements.
Classical approaches to the problem involve hand crafting features from the time series data based on fixed-sized windows and training machine learning models, such as ensembles of decision trees. The difficulty is that this feature engineering requires deep expertise in the field.
Recently, deep learning methods such as recurrent neural networks like the LSTM, and variations that make use of one-dimensional convolutional neural networks, or CNNs, have been shown to provide state-of-the-art results on challenging activity recognition tasks with little or no data feature engineering, instead using feature learning on raw data.
In this tutorial, you will discover three recurrent neural network architectures for modeling an activity recognition time series classification problem.
After completing this tutorial, you will know:
- How to develop a Long Short-Term Memory recurrent neural network for human activity recognition.
- How to develop a one-dimensional Convolutional Neural Network LSTM, or CNN-LSTM, model.
- How to develop a one-dimensional Convolutional LSTM, or ConvLSTM, model for the same problem.
Let's begin.

How to Develop RNN Models for Human Activity Recognition Time Series Classification
Photo by Bonnie Moreland, some rights reserved.
Tutorial Overview
This tutorial is divided into four parts; they are:
- Activity recognition using the smartphones dataset
- Develop an LSTM network model
- Develop a CNN-LSTM network model
- Develop a ConvLSTM network model
Activity recognition using the smartphones dataset
Human activity recognition, or HAR for short, is the problem of predicting what a person is doing based on a trace of their movement using sensors.
A standard human activity recognition dataset is the "Activity Recognition Using Smartphones" dataset made available in 2012.
It was prepared and made available by Davide Anguita, et al. from the University of Genova, Italy, and is described in full in their 2013 paper "A Public Domain Dataset for Human Activity Recognition Using Smartphones." The dataset was modeled with machine learning algorithms in their 2012 paper titled "Human Activity Recognition on Smartphones using a Multiclass Hardware-Friendly Support Vector Machine."
The dataset has been made available and can be downloaded for free from the UCI Machine Learning Repository:
The data was collected from 30 subjects aged between 19 and 48 years old performing one of six standard activities while wearing a waist-mounted smartphone that recorded movement data. Video was recorded of each subject performing the activities, and the movement data was labeled manually from these videos.
Below is an example of a video of a person performing the activities while their movement data is recorded.
The six activities carried out were as follows:
- Walking
- Walking upstairs
- Walking downstairs
- Sitting
- Standing
- Laying
The movement data recorded was the x, y, and z accelerometer data (linear acceleration) and gyroscopic data (angular velocity) from the smartphone, specifically a Samsung Galaxy S II. Observations were recorded at 50 Hz (i.e. 50 data points per second). Each subject performed the sequence of activities twice; once with the device on their left-hand side and once with the device on their right-hand side.
Raw data is not available. Instead, a pre-processed version of the dataset was made available. The pre-processing steps included:
- Pre-processing accelerometer and gyroscope signals using noise filters.
- Splitting data into fixed windows of 2.56 seconds (128 data points) with 50% overlap (a rough sketch of this windowing follows the list).
- Splitting of accelerometer data into gravitational (total) and body motion components.
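To make the windowing step concrete, below is a minimal sketch of how fixed windows with 50% overlap can be cut from a one-dimensional signal; the sliding_windows() helper is hypothetical and for illustration only, as the provided dataset already ships pre-windowed.
# sketch: split a 1D signal into fixed windows of 128 samples with 50% overlap
from numpy import array
def sliding_windows(signal, width=128, overlap=0.5):
    step = int(width * (1.0 - overlap)) # 64 samples between window starts
    windows = list()
    for start in range(0, len(signal) - width + 1, step):
        windows.append(signal[start:start + width])
    return array(windows)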
Feature engineering was applied to the window data, and a copy of the data with these engineered features was made available.
A number of time and frequency features commonly used in the field of human activity recognition were extracted from each window. The result was a 561-element vector of features.
The dataset was split into train (70%) and test (30%) sets based on data for subjects, e.g. 21 subjects for train and nine for test.
Experiment results with a support vector machine intended for use on a smartphone (e.g. fixed-point arithmetic) resulted in a predictive accuracy of 89% on the test dataset, achieving similar results to an unmodified SVM implementation.
The data set is available for free and can be downloaded from the UCI Machine Learning repository.
The data is provided as a single zip file with a size of approximately 58 megabytes. The direct link for this download is below:
Download the dataset and unzip all files into a new directory in your current working directory named "HARDataset".
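If you prefer to script this step, the following sketch unzips the archive and renames the directory; it assumes the zip has already been downloaded into the current working directory and is named 'UCI HAR Dataset.zip', unpacking to a 'UCI HAR Dataset' folder, as the UCI version does.
# sketch: unzip the downloaded archive into a directory named 'HARDataset'
from zipfile import ZipFile
from os import rename
with ZipFile('UCI HAR Dataset.zip', 'r') as archive:
    archive.extractall('.')
rename('UCI HAR Dataset', 'HARDataset')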
Develop an LSTM network model
In this section, we will develop a Long Short-Term Memory network model (LSTM) for the human activity recognition dataset.
LSTM network models are a type of recurrent neural network that are able to learn and remember over long sequences of input data. They are intended for use with data that is comprised of long sequences of data, up to 200 to 400 time steps. They may be a good fit for this problem.
The model can support multiple parallel sequences of input data, such as each axis of the accelerometer and gyroscope data. The model learns to extract features from sequences of observations and how to map the internal features to different activity types.
The benefit of using LSTMs for sequence classification is that they can learn from the raw time series data directly, and in turn do not require domain expertise to manually engineer input features. The model can learn an internal representation of the time series data and ideally achieve comparable performance to models fit on a version of the dataset with engineered features.
This section is divided into four parts; they are:
- Loading data
- Fit and evaluate the model
- Summarize the results
- Complete example
Loading data
The first step is to load the raw data set into memory.
There are three main signal types in the raw data: total acceleration, body acceleration, and body gyroscope. Each has three axes of data. This means that there are a total of nine variables for each time step.
Further, each series of data has been partitioned into overlapping windows of 2.56 seconds of data, or 128 time steps. These windows of data correspond to the windows of engineered features (rows) in the previous section.
This means that one row of data has (128 * 9), or 1,152, elements. This is a little more than double the size of the 561-element vectors in the previous section, and it is likely that there is some redundant data.
The signals are stored in the /Inertial Signals/ directory under the train and test subdirectories. Each axis of each signal is stored in a separate file, meaning that each of the train and test datasets has nine input files to load and one output file to load. We can batch load these files into groups, given the consistent directory structures and file naming conventions.
The input data is in CSV format where columns are separated by whitespace. Each of these files can be loaded as a NumPy array. The load_file() function below loads a dataset given the file path and returns the loaded data as a NumPy array.
# load a single file as a numpy array
def load_file(filepath):
    dataframe = read_csv(filepath, header=None, delim_whitespace=True)
    return dataframe.values
We can then load all data for a given group (train or test) into a single three-dimensional NumPy array, where the dimensions of the array are [samples, timesteps, features].
To make this clearer, there are 128 time steps and nine features, where the number of samples is the number of rows in any given raw signal data file.
The load_group() function below implements this behavior. The dstack() NumPy function allows us to stack each of the loaded 3D arrays into a single 3D array where the variables are separated on the third dimension (features).
# load a list of files into a 3D array of [samples, timesteps, features]
def load_group(filenames, prefix=''):
    loaded = list()
    for name in filenames:
        data = load_file(prefix + name)
        loaded.append(data)
    # stack group so that features are the 3rd dimension
    loaded = dstack(loaded)
    return loaded
We can use this function to load all input signal data for a given group, such as train or test.
The load_dataset_group() function below loads all input signal data and the output data for a single group using the consistent naming conventions between the directories.
# load a dataset group, such as train or test
def load_dataset_group(group, prefix=''):
    filepath = prefix + group + '/Inertial Signals/'
    # load all 9 files as a single array
    filenames = list()
    # total acceleration
    filenames += ['total_acc_x_'+group+'.txt', 'total_acc_y_'+group+'.txt', 'total_acc_z_'+group+'.txt']
    # body acceleration
    filenames += ['body_acc_x_'+group+'.txt', 'body_acc_y_'+group+'.txt', 'body_acc_z_'+group+'.txt']
    # body gyroscope
    filenames += ['body_gyro_x_'+group+'.txt', 'body_gyro_y_'+group+'.txt', 'body_gyro_z_'+group+'.txt']
    # load input data
    X = load_group(filenames, filepath)
    # load class output
    y = load_file(prefix + group + '/y_'+group+'.txt')
    return X, y
Finally, we can load each of the train and test datasets.
The output data is defined as an integer for the class number. These class integers must be one hot encoded so that the data is suitable for fitting a neural network multi-class classification model. We can do this by calling the to_categorical() Keras function.
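As a quick illustration of what this encoding does, here is a standalone sketch (independent of the dataset; the class values are made up for the example):
# sketch: one hot encoding of zero-offset class integers
from numpy import array
from keras.utils import to_categorical
classes = array([1, 2, 6]) - 1 # zero-offset class values: [0, 1, 5]
print(to_categorical(classes, num_classes=6))
# prints a 3x6 matrix with a single 1.0 in columns 0, 1, and 5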
The load_dataset() function below implements this behavior and returns the train and test X and y elements ready for fitting and evaluating the defined models.
# load the dataset, returns train and test X and y elements
def load_dataset(prefix=''):
    # load all train
    trainX, trainy = load_dataset_group('train', prefix + 'HARDataset/')
    print(trainX.shape, trainy.shape)
    # load all test
    testX, testy = load_dataset_group('test', prefix + 'HARDataset/')
    print(testX.shape, testy.shape)
    # zero-offset class values
    trainy = trainy - 1
    testy = testy - 1
    # one hot encode y
    trainy = to_categorical(trainy)
    testy = to_categorical(testy)
    print(trainX.shape, trainy.shape, testX.shape, testy.shape)
    return trainX, trainy, testX, testy
Fit and evaluate the model
Now that we have the data loaded into memory ready for modeling, we can define, fit, and evaluate an LSTM model.
We can define a function named evaluate_model() that takes the train and test dataset, fits a model on the training dataset, evaluates it on the test dataset, and returns an estimate of the model's performance.
First, we must define the LSTM model using the Keras deep learning library. The model requires a three-dimensional input with [samples, timesteps, features].
This is exactly how we have loaded the data, where one sample is one window of the time series data, each window has 128 time steps, and a time step has nine variables or features.
The output for the model will be a six-element vector containing the probability of a given window belonging to each of the six types of activity.
The input and output dimensions are required when fitting the model, and they can be extracted from the provided training dataset.
n_timesteps, n_features, n_outputs = trainX.shape[1], trainX.shape[2], trainy.shape[1]
The model is defined as a Sequential Keras model, for simplicity.
We will define the model as having a single hidden LSTM layer. This is followed by a dropout layer intended to reduce overfitting of the model to the training data. Finally, a dense fully connected layer is used to interpret the features extracted by the LSTM hidden layer, before a final output layer is used to make predictions.
The efficient Adam version of stochastic gradient descent will be used to optimize the network, and the categorical cross entropy loss function will be used given that we are learning a multi-class classification problem.
The definition of the model is listed below.
model = Sequential()
model.add(LSTM(100, input_shape=(n_timesteps,n_features)))
model.add(Dropout(0.5))
model.add(Dense(100, activation='relu'))
model.add(Dense(n_outputs, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
The model is fit for a fixed number of epochs, in this case 15, and a batch size of 64 samples will be used, where 64 windows of data will be exposed to the model before the weights of the model are updated.
Once the model is fit, it is evaluated on the test dataset and the accuracy of the fit model on the test dataset is returned.
Note: It is common to not shuffle sequence data when fitting an LSTM. Here we do shuffle the windows of input data during training (the default). In this problem, we are interested in harnessing the LSTMs ability to learn and extract features across the time steps in a window, not across windows.
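For reference, this shuffling of windows between epochs is controlled by the shuffle argument of fit(), shown below with its default value; it could be set to False if window order mattered.
# windows are shuffled between epochs by default; shuffle=False would preserve order
model.fit(trainX, trainy, epochs=epochs, batch_size=batch_size, verbose=verbose, shuffle=True)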
The complete evaluate_model() function is listed below.
# fit and evaluate a model
def evaluate_model(trainX, trainy, testX, testy):
    verbose, epochs, batch_size = 0, 15, 64
    n_timesteps, n_features, n_outputs = trainX.shape[1], trainX.shape[2], trainy.shape[1]
    model = Sequential()
    model.add(LSTM(100, input_shape=(n_timesteps,n_features)))
    model.add(Dropout(0.5))
    model.add(Dense(100, activation='relu'))
    model.add(Dense(n_outputs, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    # fit network
    model.fit(trainX, trainy, epochs=epochs, batch_size=batch_size, verbose=verbose)
    # evaluate model
    _, accuracy = model.evaluate(testX, testy, batch_size=batch_size, verbose=0)
    return accuracy
There is nothing special about the network structure or chosen hyperparameters; they are just a starting point for this problem.
Summarize the results
We cannot judge the skill of the model from a single evaluation.
The reason for this is that neural networks are stochastic, meaning that a different specific model will result when training the same model configuration on the same data.
This is a feature of the network in that it gives the model its adaptive ability, but it requires a slightly more complicated evaluation of the model.
We will repeat the evaluation of the model multiple times, then summarize the performance of the model across each of those runs. For example, we can call evaluate_model() a total of 10 times. This will result in a population of model evaluation scores that must be summarized.
# repeat experiment
scores = list()
for r in range(repeats):
    score = evaluate_model(trainX, trainy, testX, testy)
    score = score * 100.0
    print('>#%d: %.3f' % (r+1, score))
    scores.append(score)
We can summarize the sample of scores by calculating and reporting the mean and standard deviation of the performance. The mean gives the average accuracy of the model on the dataset, whereas the standard deviation gives the average variance of the accuracy from the mean.
The summarize_results() function below summarizes the results of a run.
# summarize scores
def summarize_results(scores):
    print(scores)
    m, s = mean(scores), std(scores)
    print('Accuracy: %.3f%% (+/-%.3f)' % (m, s))
We can group the repeated evaluation, gathering of results, and summarization of results into a main function for the experiment, called run_experiment(), listed below.
By default, the model is evaluated 10 times before the model performance is reported.
# run an experiment
def run_experiment(repeats=10):
    # load data
    trainX, trainy, testX, testy = load_dataset()
    # repeat experiment
    scores = list()
    for r in range(repeats):
        score = evaluate_model(trainX, trainy, testX, testy)
        score = score * 100.0
        print('>#%d: %.3f' % (r+1, score))
        scores.append(score)
    # summarize results
    summarize_results(scores)
Complete example
Now that we have all the pieces, we can tie them together in a working example.
The complete list of the code is provided below.
# lstm model
from numpy import mean
from numpy import std
from numpy import dstack
from pandas import read_csv
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import Dropout
from keras.layers import LSTM
from keras.utils import to_categorical
from matplotlib import pyplot

# load a single file as a numpy array
def load_file(filepath):
    dataframe = read_csv(filepath, header=None, delim_whitespace=True)
    return dataframe.values

# load a list of files and return as a 3d numpy array
def load_group(filenames, prefix=''):
    loaded = list()
    for name in filenames:
        data = load_file(prefix + name)
        loaded.append(data)
    # stack group so that features are the 3rd dimension
    loaded = dstack(loaded)
    return loaded

# load a dataset group, such as train or test
def load_dataset_group(group, prefix=''):
    filepath = prefix + group + '/Inertial Signals/'
    # load all 9 files as a single array
    filenames = list()
    # total acceleration
    filenames += ['total_acc_x_'+group+'.txt', 'total_acc_y_'+group+'.txt', 'total_acc_z_'+group+'.txt']
    # body acceleration
    filenames += ['body_acc_x_'+group+'.txt', 'body_acc_y_'+group+'.txt', 'body_acc_z_'+group+'.txt']
    # body gyroscope
    filenames += ['body_gyro_x_'+group+'.txt', 'body_gyro_y_'+group+'.txt', 'body_gyro_z_'+group+'.txt']
    # load input data
    X = load_group(filenames, filepath)
    # load class output
    y = load_file(prefix + group + '/y_'+group+'.txt')
    return X, y

# load the dataset, returns train and test X and y elements
def load_dataset(prefix=''):
    # load all train
    trainX, trainy = load_dataset_group('train', prefix + 'HARDataset/')
    print(trainX.shape, trainy.shape)
    # load all test
    testX, testy = load_dataset_group('test', prefix + 'HARDataset/')
    print(testX.shape, testy.shape)
    # zero-offset class values
    trainy = trainy - 1
    testy = testy - 1
    # one hot encode y
    trainy = to_categorical(trainy)
    testy = to_categorical(testy)
    print(trainX.shape, trainy.shape, testX.shape, testy.shape)
    return trainX, trainy, testX, testy

# fit and evaluate a model
def evaluate_model(trainX, trainy, testX, testy):
    verbose, epochs, batch_size = 0, 15, 64
    n_timesteps, n_features, n_outputs = trainX.shape[1], trainX.shape[2], trainy.shape[1]
    model = Sequential()
    model.add(LSTM(100, input_shape=(n_timesteps,n_features)))
    model.add(Dropout(0.5))
    model.add(Dense(100, activation='relu'))
    model.add(Dense(n_outputs, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    # fit network
    model.fit(trainX, trainy, epochs=epochs, batch_size=batch_size, verbose=verbose)
    # evaluate model
    _, accuracy = model.evaluate(testX, testy, batch_size=batch_size, verbose=0)
    return accuracy

# summarize scores
def summarize_results(scores):
    print(scores)
    m, s = mean(scores), std(scores)
    print('Accuracy: %.3f%% (+/-%.3f)' % (m, s))

# run an experiment
def run_experiment(repeats=10):
    # load data
    trainX, trainy, testX, testy = load_dataset()
    # repeat experiment
    scores = list()
    for r in range(repeats):
        score = evaluate_model(trainX, trainy, testX, testy)
        score = score * 100.0
        print('>#%d: %.3f' % (r+1, score))
        scores.append(score)
    # summarize results
    summarize_results(scores)

# run the experiment
run_experiment()
Running the example first prints the shape of the loaded dataset, then the shape of the train and test sets and the input and output elements. This confirms the number of samples, time steps, and variables, as well as the number of classes.
Next, models are created and evaluated and a debug message is printed for each.
Finally, the sample of scores is printed, followed by the mean and standard deviation. We can see that the model performed well, achieving a classification accuracy of about 89.7% trained on the raw dataset, with a standard deviation of about 1.3.
This is a good result, considering that the original paper published a result of 89%, trained on the dataset with heavy domain-specific feature engineering, not the raw dataset.
Note: given the stochastic nature of the algorithm, your specific results may vary. If so, try running the code a few times.
(7352, 128, 9) (7352, 1)
(2947, 128, 9) (2947, 1)
(7352, 128, 9) (7352, 6) (2947, 128, 9) (2947, 6)
>#1: 90.058
>#2: 85.918
>#3: 90.974
>#4: 89.515
>#5: 90.159
>#6: 91.110
>#7: 89.718
>#8: 90.295
>#9: 89.447
>#10: 90.024
[90.05768578215134, 85.91788259246692, 90.97387173396675, 89.51476077366813, 90.15948422124194, 91.10960298608755, 89.71835765184933, 90.29521547336275, 89.44689514760775, 90.02375296912113]
Accuracy: 89.722% (+/-1.371)
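Although this tutorial focuses on evaluating models, a fit model could also be used to make a prediction for a new window of data. A minimal sketch, assuming a model fit as in evaluate_model() and the loaded test data:
# sketch: predict the activity class for a single window of data
from numpy import argmax
yhat = model.predict(testX[0:1], verbose=0) # probabilities over the six classes
print('predicted class: %d' % (argmax(yhat) + 1)) # +1 undoes the zero-offset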
Now that we have seen how to develop an LSTM model for time series classification, let’s look at how we can develop a more sophisticated CNN LSTM model.
Develop a CNN-LSTM Network Model
The CNN LSTM architecture involves using Convolutional Neural Network (CNN) layers for feature extraction on input data combined with LSTMs to support sequence prediction.
CNN LSTMs were developed for visual time series prediction problems and the application of generating textual descriptions from sequences of images (e.g. videos). Specifically, the problems of:
- Activity Recognition: Generating a textual description of an activity demonstrated in a sequence of images.
- Image Description: Generating a textual description of a single image.
- Video Description: Generating a textual description of a sequence of images.
You can learn more about the CNN LSTM architecture in the post:
To learn more about the consequences of combining these models, see the paper:
The CNN LSTM model will read subsequences of the main sequence in as blocks, extract features from each block, then allow the LSTM to interpret the features extracted from each block.
One approach to implementing this model is to split each window of 128 time steps into subsequences for the CNN model to process. For example, the 128 time steps in each window can be split into four subsequences of 32 time steps.
# reshape data into time steps of sub-sequences
n_steps, n_length = 4, 32
trainX = trainX.reshape((trainX.shape[0], n_steps, n_length, n_features))
testX = testX.reshape((testX.shape[0], n_steps, n_length, n_features))
We can then define a CNN model that expects to read in sequences with a length of 32 time steps and nine features.
The entire CNN model can be wrapped in a TimeDistributed layer to allow the same CNN model to read in each of the four subsequences in the window. The extracted features are then flattened and provided to the LSTM model to read, extracting its own features before a final mapping to an activity is made.
# define model
model = Sequential()
model.add(TimeDistributed(Conv1D(filters=64, kernel_size=3, activation='relu'), input_shape=(None,n_length,n_features)))
model.add(TimeDistributed(Conv1D(filters=64, kernel_size=3, activation='relu')))
model.add(TimeDistributed(Dropout(0.5)))
model.add(TimeDistributed(MaxPooling1D(pool_size=2)))
model.add(TimeDistributed(Flatten()))
model.add(LSTM(100))
model.add(Dropout(0.5))
model.add(Dense(100, activation='relu'))
model.add(Dense(n_outputs, activation='softmax'))
It is common to use two consecutive CNN layers followed by dropout and a max pooling layer, and that is the simple structure used in the CNN LSTM model here.
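One way to sanity-check how the TimeDistributed wrapper changes the layer output shapes is to print a model summary after the model has been defined; this is a quick diagnostic, not part of the evaluation harness.
# sketch: print per-layer output shapes to verify the TimeDistributed wrapping
model.summary()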
The updated evaluate_model() is listed below.
# fit and evaluate a model
def evaluate_model(trainX, trainy, testX, testy):
    # define model
    verbose, epochs, batch_size = 0, 25, 64
    n_timesteps, n_features, n_outputs = trainX.shape[1], trainX.shape[2], trainy.shape[1]
    # reshape data into time steps of sub-sequences
    n_steps, n_length = 4, 32
    trainX = trainX.reshape((trainX.shape[0], n_steps, n_length, n_features))
    testX = testX.reshape((testX.shape[0], n_steps, n_length, n_features))
    # define model
    model = Sequential()
    model.add(TimeDistributed(Conv1D(filters=64, kernel_size=3, activation='relu'), input_shape=(None,n_length,n_features)))
    model.add(TimeDistributed(Conv1D(filters=64, kernel_size=3, activation='relu')))
    model.add(TimeDistributed(Dropout(0.5)))
    model.add(TimeDistributed(MaxPooling1D(pool_size=2)))
    model.add(TimeDistributed(Flatten()))
    model.add(LSTM(100))
    model.add(Dropout(0.5))
    model.add(Dense(100, activation='relu'))
    model.add(Dense(n_outputs, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    # fit network
    model.fit(trainX, trainy, epochs=epochs, batch_size=batch_size, verbose=verbose)
    # evaluate model
    _, accuracy = model.evaluate(testX, testy, batch_size=batch_size, verbose=0)
    return accuracy
We can evaluate this model as we did the straight LSTM model in the previous section.
The complete code listing is provided below.
# cnn lstm model
from numpy import mean
from numpy import std
from numpy import dstack
from pandas import read_csv
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import Dropout
from keras.layers import LSTM
from keras.layers import TimeDistributed
from keras.layers.convolutional import Conv1D
from keras.layers.convolutional import MaxPooling1D
from keras.utils import to_categorical
from matplotlib import pyplot
# load a single file as a numpy array
def load_file(filepath):
    dataframe = read_csv(filepath, header=None, delim_whitespace=True)
    return dataframe.values

# load a list of files and return as a 3d numpy array
def load_group(filenames, prefix=''):
    loaded = list()
    for name in filenames:
        data = load_file(prefix + name)
        loaded.append(data)
    # stack group so that features are the 3rd dimension
    loaded = dstack(loaded)
    return loaded

# load a dataset group, such as train or test
def load_dataset_group(group, prefix=''):
    filepath = prefix + group + '/Inertial Signals/'
    # load all 9 files as a single array
    filenames = list()
    # total acceleration
    filenames += ['total_acc_x_'+group+'.txt', 'total_acc_y_'+group+'.txt', 'total_acc_z_'+group+'.txt']
    # body acceleration
    filenames += ['body_acc_x_'+group+'.txt', 'body_acc_y_'+group+'.txt', 'body_acc_z_'+group+'.txt']
    # body gyroscope
    filenames += ['body_gyro_x_'+group+'.txt', 'body_gyro_y_'+group+'.txt', 'body_gyro_z_'+group+'.txt']
    # load input data
    X = load_group(filenames, filepath)
    # load class output
    y = load_file(prefix + group + '/y_'+group+'.txt')
    return X, y

# load the dataset, returns train and test X and y elements
def load_dataset(prefix=''):
    # load all train
    trainX, trainy = load_dataset_group('train', prefix + 'HARDataset/')
    print(trainX.shape, trainy.shape)
    # load all test
    testX, testy = load_dataset_group('test', prefix + 'HARDataset/')
    print(testX.shape, testy.shape)
    # zero-offset class values
    trainy = trainy - 1
    testy = testy - 1
    # one hot encode y
    trainy = to_categorical(trainy)
    testy = to_categorical(testy)
    print(trainX.shape, trainy.shape, testX.shape, testy.shape)
    return trainX, trainy, testX, testy

# fit and evaluate a model
def evaluate_model(trainX, trainy, testX, testy):
    # define model
    verbose, epochs, batch_size = 0, 25, 64
    n_timesteps, n_features, n_outputs = trainX.shape[1], trainX.shape[2], trainy.shape[1]
    # reshape data into time steps of sub-sequences
    n_steps, n_length = 4, 32
    trainX = trainX.reshape((trainX.shape[0], n_steps, n_length, n_features))
    testX = testX.reshape((testX.shape[0], n_steps, n_length, n_features))
    # define model
    model = Sequential()
    model.add(TimeDistributed(Conv1D(filters=64, kernel_size=3, activation='relu'), input_shape=(None,n_length,n_features)))
    model.add(TimeDistributed(Conv1D(filters=64, kernel_size=3, activation='relu')))
    model.add(TimeDistributed(Dropout(0.5)))
    model.add(TimeDistributed(MaxPooling1D(pool_size=2)))
    model.add(TimeDistributed(Flatten()))
    model.add(LSTM(100))
    model.add(Dropout(0.5))
    model.add(Dense(100, activation='relu'))
    model.add(Dense(n_outputs, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    # fit network
    model.fit(trainX, trainy, epochs=epochs, batch_size=batch_size, verbose=verbose)
    # evaluate model
    _, accuracy = model.evaluate(testX, testy, batch_size=batch_size, verbose=0)
    return accuracy

# summarize scores
def summarize_results(scores):
    print(scores)
    m, s = mean(scores), std(scores)
    print('Accuracy: %.3f%% (+/-%.3f)' % (m, s))

# run an experiment
def run_experiment(repeats=10):
    # load data
    trainX, trainy, testX, testy = load_dataset()
    # repeat experiment
    scores = list()
    for r in range(repeats):
        score = evaluate_model(trainX, trainy, testX, testy)
        score = score * 100.0
        print('>#%d: %.3f' % (r+1, score))
        scores.append(score)
    # summarize results
    summarize_results(scores)

# run the experiment
run_experiment()
Running the example summarizes the model performance for each of the 10 runs before a final summary of the model's performance on the test set is reported.
We can see that the model achieved a performance of about 90.6% with a standard deviation of about 1%.
Note: given the stochastic nature of the algorithm, your specific results may vary. If so, try running the code a few times.
>#1: 91.517
>#2: 91.042
>#3: 90.804
>#4: 92.263
>#5: 89.684
>#6: 88.666
>#7: 91.381
>#8: 90.804
>#9: 89.379
>#10: 91.347
[91.51679674244994, 91.04173736002714, 90.80420766881574, 92.26331862911435, 89.68442483881914, 88.66644044791313, 91.38106549032915, 90.80420766881574, 89.37902952154734, 91.34713267729894]
Accuracy: 90.689% (+/-1.051)
Develop a ConvLSTM Network Model
A further extension of the CNN LSTM idea is to perform the convolutions of the CNN (e.g. how the CNN reads the input sequence data) as part of the LSTM.
This combination is called a Convolutional LSTM, or ConvLSTM for short, and like the CNN LSTM is also used for spatio-temporal data.
Unlike an LSTM that reads the data in directly in order to calculate internal state and state transitions, and unlike the CNN LSTM that is interpreting the output from CNN models, the ConvLSTM is using convolutions directly as part of reading input into the LSTM units themselves.
For more information on how the equations for the ConvLSTM are calculated within the LSTM unit, see the paper:
The Keras library provides the ConvLSTM2D class that supports the ConvLSTM model for 2D data. It can be configured for 1D multivariate time series classification.
The ConvLSTM2D class, by default, expects input data to have the shape:
(samples, time, rows, cols, channels)
Where each time step of data is defined as an image of (rows * columns) data points.
In the previous section, we divided a given window of data (128 time steps) into four subsequences of 32 time steps. We can use this same subsequence approach in defining the ConvLSTM2D input where the number of time steps is the number of subsequences in the window, the number of rows is 1 as we are working with one-dimensional data, and the number of columns represents the number of time steps in the subsequence, in this case 32.
For this chosen framing of the problem, the input for the ConvLSTM2D would therefore be:
- Samples: n, for the number of windows in the dataset.
- Time: 4, for the four subsequences that we split a window of 128 time steps into.
- Rows: 1, for the one-dimensional shape of each subsequence.
- Columns: 32, for the 32 time steps in an input subsequence.
- Channels: 9, for the nine input variables.
We can now prepare the data for the ConvLSTM2D model.
n_timesteps, n_features, n_outputs = trainX.shape[1], trainX.shape[2], trainy.shape[1]
# reshape into subsequences (samples, time steps, rows, cols, channels)
n_steps, n_length = 4, 32
trainX = trainX.reshape((trainX.shape[0], n_steps, 1, n_length, n_features))
testX = testX.reshape((testX.shape[0], n_steps, 1, n_length, n_features))
The ConvLSTM2D class requires configuration both in terms of the CNN and the LSTM. This includes specifying the number of filters (e.g. 64), the two-dimensional kernel size, in this case (1 row and 3 columns of the subsequence time steps), and the activation function, in this case rectified linear.
As with a CNN or LSTM model, the output must be flattened into one long vector before it can be interpreted by a dense layer.
# define model
model = Sequential()
model.add(ConvLSTM2D(filters=64, kernel_size=(1,3), activation='relu', input_shape=(n_steps, 1, n_length, n_features)))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(100, activation='relu'))
model.add(Dense(n_outputs, activation='softmax'))
We can then evaluate the model as we did the LSTM and CNN LSTM models before it.
The complete example is listed below.
# convlstm model
from numpy import mean
from numpy import std
from numpy import dstack
from pandas import read_csv
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import Dropout
from keras.layers import LSTM
from keras.layers import TimeDistributed
from keras.layers import ConvLSTM2D
from keras.utils import to_categorical
from matplotlib import pyplot
# load a single file as a numpy array
def load_file(filepath):
    dataframe = read_csv(filepath, header=None, delim_whitespace=True)
    return dataframe.values

# load a list of files and return as a 3d numpy array
def load_group(filenames, prefix=''):
    loaded = list()
    for name in filenames:
        data = load_file(prefix + name)
        loaded.append(data)
    # stack group so that features are the 3rd dimension
    loaded = dstack(loaded)
    return loaded

# load a dataset group, such as train or test
def load_dataset_group(group, prefix=''):
    filepath = prefix + group + '/Inertial Signals/'
    # load all 9 files as a single array
    filenames = list()
    # total acceleration
    filenames += ['total_acc_x_'+group+'.txt', 'total_acc_y_'+group+'.txt', 'total_acc_z_'+group+'.txt']
    # body acceleration
    filenames += ['body_acc_x_'+group+'.txt', 'body_acc_y_'+group+'.txt', 'body_acc_z_'+group+'.txt']
    # body gyroscope
    filenames += ['body_gyro_x_'+group+'.txt', 'body_gyro_y_'+group+'.txt', 'body_gyro_z_'+group+'.txt']
    # load input data
    X = load_group(filenames, filepath)
    # load class output
    y = load_file(prefix + group + '/y_'+group+'.txt')
    return X, y

# load the dataset, returns train and test X and y elements
def load_dataset(prefix=''):
    # load all train
    trainX, trainy = load_dataset_group('train', prefix + 'HARDataset/')
    print(trainX.shape, trainy.shape)
    # load all test
    testX, testy = load_dataset_group('test', prefix + 'HARDataset/')
    print(testX.shape, testy.shape)
    # zero-offset class values
    trainy = trainy - 1
    testy = testy - 1
    # one hot encode y
    trainy = to_categorical(trainy)
    testy = to_categorical(testy)
    print(trainX.shape, trainy.shape, testX.shape, testy.shape)
    return trainX, trainy, testX, testy

# fit and evaluate a model
def evaluate_model(trainX, trainy, testX, testy):
    # define model
    verbose, epochs, batch_size = 0, 25, 64
    n_timesteps, n_features, n_outputs = trainX.shape[1], trainX.shape[2], trainy.shape[1]
    # reshape into subsequences (samples, time steps, rows, cols, channels)
    n_steps, n_length = 4, 32
    trainX = trainX.reshape((trainX.shape[0], n_steps, 1, n_length, n_features))
    testX = testX.reshape((testX.shape[0], n_steps, 1, n_length, n_features))
    # define model
    model = Sequential()
    model.add(ConvLSTM2D(filters=64, kernel_size=(1,3), activation='relu', input_shape=(n_steps, 1, n_length, n_features)))
    model.add(Dropout(0.5))
    model.add(Flatten())
    model.add(Dense(100, activation='relu'))
    model.add(Dense(n_outputs, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    # fit network
    model.fit(trainX, trainy, epochs=epochs, batch_size=batch_size, verbose=verbose)
    # evaluate model
    _, accuracy = model.evaluate(testX, testy, batch_size=batch_size, verbose=0)
    return accuracy

# summarize scores
def summarize_results(scores):
    print(scores)
    m, s = mean(scores), std(scores)
    print('Accuracy: %.3f%% (+/-%.3f)' % (m, s))

# run an experiment
def run_experiment(repeats=10):
    # load data
    trainX, trainy, testX, testy = load_dataset()
    # repeat experiment
    scores = list()
    for r in range(repeats):
        score = evaluate_model(trainX, trainy, testX, testy)
        score = score * 100.0
        print('>#%d: %.3f' % (r+1, score))
        scores.append(score)
    # summarize results
    summarize_results(scores)

# run the experiment
run_experiment()
As with the prior experiments, running the model prints the performance of the model each time it is fit and evaluated. A summary of the final model performance is presented at the end of the run.
We can see that the model does consistently perform well on the problem achieving an accuracy of about 90%, perhaps with fewer resources than the larger CNN LSTM model.
Note: given the stochastic nature of the algorithm, your specific results may vary. If so, try running the code a few times.
>#1: 90.092
>#2: 91.619
>#3: 92.128
>#4: 90.533
>#5: 89.243
>#6: 90.940
>#7: 92.026
>#8: 91.008
>#9: 90.499
>#10: 89.922
[90.09161859518154, 91.61859518154056, 92.12758737699356, 90.53274516457415, 89.24329826942655, 90.93993892093654, 92.02578893790296, 91.00780454699695, 90.49881235154395, 89.92195453003053]
Accuracy: 90.801% (+/-0.886)
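One way to check the claim about model size is to compare trainable parameter counts after defining each model; Keras exposes this via count_params(), as in the sketch below.
# sketch: compare model sizes by parameter count after the model is defined
print('parameters: %d' % model.count_params())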
Extensions
This section lists some ideas for extending the tutorial that you may wish to explore.
- Data Preparation. Consider exploring whether simple data scaling schemes can further lift model performance, such as normalization, standardization, and power transforms.
- LSTM Variations. There are variations of the LSTM architecture that may achieve better performance on this problem, such as stacked LSTMs and Bidirectional LSTMs (a starting sketch follows this list).
- Hyperparameter Tuning. Consider exploring tuning of model hyperparameters such as the number of units, training epochs, batch size, and more.
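As a starting point for the LSTM variations mentioned above, a stacked or bidirectional variant only changes the model definition. A minimal sketch, assuming the same n_timesteps, n_features, and n_outputs as in the LSTM section:
# sketch: a bidirectional first layer stacked on a second LSTM layer
from keras.layers import Bidirectional
model = Sequential()
# return_sequences=True passes the full sequence to the next LSTM layer
model.add(Bidirectional(LSTM(100, return_sequences=True), input_shape=(n_timesteps,n_features)))
model.add(LSTM(100))
model.add(Dropout(0.5))
model.add(Dense(100, activation='relu'))
model.add(Dense(n_outputs, activation='softmax'))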
If you explore any of these extensions, I’d love to know.
Further Reading
This section provides more resources on the topic if you are looking to go deeper.
Papers
- A Public Domain Dataset for Human Activity Recognition Using Smartphones, 2013.
- Human Activity Recognition on Smartphones using a Multiclass Hardware-Friendly Support Vector Machine, 2012.
- Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting, 2015.
Summary
In this tutorial, you discovered three recurrent neural network architectures for modeling an activity recognition time series classification problem.
Specifically, you learned:
- How to develop a Long Short-Term Memory Recurrent Neural Network for human activity recognition.
- How to develop a one-dimensional Convolutional Neural Network LSTM, or CNN LSTM, model.
- How to develop a one-dimensional Convolutional LSTM, or ConvLSTM, model for the same problem.
Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.