Thanks for the interest in TNet! At the moment, please consider it a dead project, as I have fully switched my efforts to the 'nnet1' recipe in Kaldi: http://kaldi.sourceforge.net/dnn1.html. You can still use TNet, but it will not be extended any further. Thanks!
TNet is a tool for parallel training of neural networks for classification. It contains two independent sets of tools, one for CPU and one for GPU. The CPU training is based on multi-threaded data parallelization, while the GPU training is implemented in CUDA; both implement mini-batch Stochastic Gradient Descent, optimizing the per-frame cross-entropy.
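As an illustration of that objective (not TNet code), the following self-contained C++ sketch runs mini-batch SGD minimizing the per-frame cross-entropy of a single softmax layer; the dimensions, data and learning rate are made up for the example:

  // Minimal sketch: mini-batch SGD on per-frame cross-entropy (single softmax layer).
  #include <cmath>
  #include <cstdio>
  #include <vector>

  int main() {
    const int dim_in = 3, dim_out = 2, batch = 4;
    const float lr = 0.1f;

    // Hypothetical tiny mini-batch: 4 frames of 3-dim features with class labels.
    float x[batch][dim_in] = {{1,0,0},{0,1,0},{0,0,1},{1,1,0}};
    int lab[batch] = {0, 1, 1, 0};

    std::vector<float> W(dim_out * dim_in, 0.0f), b(dim_out, 0.0f);

    for (int iter = 0; iter < 100; ++iter) {
      std::vector<float> gW(dim_out * dim_in, 0.0f), gb(dim_out, 0.0f);
      float xent = 0.0f;
      for (int n = 0; n < batch; ++n) {
        // Forward pass: affine transform + softmax (one frame).
        float y[dim_out], sum = 0.0f;
        for (int o = 0; o < dim_out; ++o) {
          y[o] = b[o];
          for (int i = 0; i < dim_in; ++i) y[o] += W[o*dim_in+i] * x[n][i];
          y[o] = std::exp(y[o]);
          sum += y[o];
        }
        for (int o = 0; o < dim_out; ++o) y[o] /= sum;
        xent -= std::log(y[lab[n]]);
        // Backward pass: gradient of cross-entropy w.r.t. the affine parameters.
        for (int o = 0; o < dim_out; ++o) {
          float err = y[o] - (o == lab[n] ? 1.0f : 0.0f);
          gb[o] += err;
          for (int i = 0; i < dim_in; ++i) gW[o*dim_in+i] += err * x[n][i];
        }
      }
      // Mini-batch SGD update, gradients averaged over the batch.
      for (size_t k = 0; k < W.size(); ++k) W[k] -= lr * gW[k] / batch;
      for (size_t k = 0; k < b.size(); ++k) b[k] -= lr * gb[k] / batch;
      if (iter % 20 == 0) std::printf("iter %d, xent per frame = %f\n", iter, xent / batch);
    }
    return 0;
  }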
The toolkit contains an example of NN training on TIMIT, which can easily be transferred to your own data. You may also be interested in the hierarchical "Universal Context Network", a.k.a. the "Stacked bottleneck network", which can be built using the one-touch script: tools/train/tnet_train_uc.sh
The package contains source code, libraries and a training example:
TNet_v1_25_r226_NewCUDA.tar.gz
The TNet project depends on the following libraries:
Philosophically, the neural network is viewed as a sequence of linear/non-linear transforms (BiasedLinearity, Sigmoid or Softmax), each of which is able to propagate, back-propagate and update. It can be extended with your own transforms (see the interfaces Component and UpdatableComponent); a simplified sketch of these interfaces is given below. Both the cross-entropy and mean-square-error training criteria are implemented. The feature/label files are HTK compatible (labels use the MLF format).
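To make the interfaces concrete, here is a simplified C++ sketch of how such a Component / UpdatableComponent hierarchy could look. It follows the description above but is not the actual TNet header, so the method names, argument types and the Sigmoid example are assumptions:

  // Simplified sketch of the layered design; the real TNet interfaces differ
  // in details such as the matrix classes and the update arguments.
  #include <cmath>
  #include <vector>

  typedef std::vector<float> Vec;  // stand-in for TNet's matrix/vector types

  // A transform that can propagate forward and back-propagate error derivatives.
  class Component {
   public:
    virtual ~Component() {}
    virtual void Propagate(const Vec& in, Vec* out) = 0;
    virtual void Backpropagate(const Vec& out_err, Vec* in_err) = 0;
  };

  // A transform that additionally has trainable parameters (e.g. BiasedLinearity).
  class UpdatableComponent : public Component {
   public:
    virtual void Update(const Vec& in, const Vec& out_err, float learn_rate) = 0;
  };

  // Example of a non-updatable transform: element-wise sigmoid.
  class Sigmoid : public Component {
   public:
    void Propagate(const Vec& in, Vec* out) {
      out->resize(in.size());
      for (size_t i = 0; i < in.size(); ++i) (*out)[i] = 1.0f / (1.0f + std::exp(-in[i]));
      last_out_ = *out;  // cache activations for the backward pass
    }
    void Backpropagate(const Vec& out_err, Vec* in_err) {
      in_err->resize(out_err.size());
      for (size_t i = 0; i < out_err.size(); ++i)
        (*in_err)[i] = out_err[i] * last_out_[i] * (1.0f - last_out_[i]);
    }
   private:
    Vec last_out_;
  };

Under this view, the network itself is just an ordered container of such components: Propagate and Backpropagate calls are chained through the sequence, and Update is applied to the updatable components after each mini-batch.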
At the moment TNet is not being actively extended, as the author has transferred his focus to the Kaldi toolkit.
For support (compilation issues,...), please contact iveselyk(at)fit.vutbr.cz
The tool is distributed under the Apache v2.0 licence: http://www.apache.org/licenses/LICENSE-2.0.txt