Branches:
  • biggerdata
  • master (default)
  • normalise
Commit history (newest first; the graph's date axis spans 18 Jan to 19 Jun):
  • fixed bugs and added a lot of results [branch head: biggerdata]
  • fixed data augmentation bugs
  • switched to processing things with numpy, very fast whilst threading
  • asynchronous is still quite slow; increased thread count to 2
  • added asynchronous prefetching of data, otherwise training was simply too slow (sketched below, after this list)
  • fixed temporal model over videos and should have fixed temporal restoring
  • adding plotting for the normal convnets; evaluation over long videos is working, but not classifying very well
  • Added graphing capabilities and almost-final networks, still large errors though
  • fixed formatting bug, changed temporal model
  • All networks should technically be functioning. Seeing some strange results on temporal. Tidied up code
  • GCNN should be connected to temporal model for training
  • Added loading of vidlets to be trained, and fixed the original network
  • realised numpy could not deal with such large arrays, so implemented the random batching, currently running now (see the HDF5 sketch below)
  • removed prints and fixed network structure
  • seems to be able to train on the TCNN; now need to get the evaluation working and fix restoring the model
  • basics of the network should be OK, now need to make it run properly
  • Annoying edge case in getting batch data fixed
  • Normal network should be working, with starts on the temporal model. Fixed a bug in classification report
  • Includes the working benchmark, and should be able to process any amount of data
  • altered master to only have 14 labels, but failure on saving [branch head: master]
  • changed folder path for data
  • tried to fiddle with array types to fix training, but to no avail yet
  • The big data is successfully recombined into one big numpy array, however CUDA says out of memory
  • I have split the datasets using h5py, overcoming numpy's problem of having too much data
  • added label manager to tidy up code [branch head: normalise]
  • changing the code to process the larger data; cannot save the train data, as it is too large. This is a known problem in Python 2.7; I will fix this in the next commit
  • Added code for preprocessing large iFind dataset
  • More features in the 2nd layer I believe suffered from overfitting; reverting change
  • changed the number of features and size of kernels
  • Removed dropout in training and received much better results
  • Removed .npy files
  • Removed compiled files and .npy files
  • Tried adding TensorBoard to visualise the network. Also added batch normalisation (sketched below)
  • initial commit
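Several commits describe adding asynchronous prefetching of batches on background threads (later raised to 2) because data loading alone was too slow for training. The log does not include the implementation itself; below is a minimal sketch of the pattern in Python, where `load_batch`, the thread count default, and the queue capacity are hypothetical stand-ins, not the repository's actual code.

    import queue
    import threading

    def prefetch_batches(load_batch, num_batches, num_threads=2, capacity=4):
        """Yield batches produced by background worker threads."""
        buf = queue.Queue(maxsize=capacity)  # bounded, so workers cannot race far ahead

        def worker():
            while True:
                buf.put(load_batch())  # blocks while the buffer is full

        for _ in range(num_threads):  # the log mentions raising this count to 2
            threading.Thread(target=worker, daemon=True).start()

        for _ in range(num_batches):
            yield buf.get()  # the training loop consumes ready-made batches

The bounded queue provides back-pressure: worker threads load the next batches while the current one is training, which is the overlap the prefetching commits were after.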
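Two other commits explain moving from one huge in-memory numpy array (which hit both a CUDA out-of-memory error and Python 2.7's failure to save very large arrays) to h5py storage plus random batching. A sketch of that approach follows, assuming hypothetical dataset names "images" and "labels":

    import h5py
    import numpy as np

    def save_dataset(path, images, labels):
        # HDF5 sidesteps the Python 2.7 problem of saving very large arrays.
        with h5py.File(path, "w") as f:
            f.create_dataset("images", data=images, chunks=True)
            f.create_dataset("labels", data=labels)

    def random_batch(path, batch_size):
        # Read only the rows of one batch, so the full dataset never has
        # to sit in memory at once.
        with h5py.File(path, "r") as f:
            n = f["images"].shape[0]
            # h5py fancy indexing requires indices in increasing order.
            idx = np.sort(np.random.choice(n, batch_size, replace=False))
            return f["images"][idx], f["labels"][idx]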
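The earliest commits mention adding batch normalisation and trying TensorBoard. Given the project's Python 2.7 era, the TensorFlow 1.x API is a reasonable assumption; the sketch below shows one way those pieces fit together. The input shape and layer sizes are illustrative only, and the 14-class head is taken from the master-branch commit about 14 labels.

    import tensorflow as tf  # TensorFlow 1.x API, assumed from the project's era

    def conv_block(x, filters, is_training):
        """Convolution followed by batch normalisation, as the commit describes."""
        h = tf.layers.conv2d(x, filters, kernel_size=3, padding="same")
        h = tf.layers.batch_normalization(h, training=is_training)
        return tf.nn.relu(h)

    images = tf.placeholder(tf.float32, [None, 64, 64, 1])  # illustrative shape
    labels = tf.placeholder(tf.int64, [None])
    is_training = tf.placeholder(tf.bool)

    h = conv_block(images, 32, is_training)
    logits = tf.layers.dense(tf.layers.flatten(h), 14)  # 14 labels, per the master commit
    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))

    # Batch norm keeps moving averages in UPDATE_OPS; they must run with the train op.
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
        train_op = tf.train.AdamOptimizer(1e-4).minimize(loss)

    # TensorBoard: log the loss scalar and the graph itself.
    tf.summary.scalar("loss", loss)
    merged = tf.summary.merge_all()
    writer = tf.summary.FileWriter("logs/", tf.get_default_graph())

The control dependency on UPDATE_OPS is the standard TF 1.x requirement for batch normalisation: without it the moving averages never update during training.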