Augmented Recurrent Neural Networks: Week 11

As we approach the finale of the GSoC program, the HAM work is getting deeper into the implementation stage. For instance, we have finally implemented and tested the HAMUnit forward pass.

Right now, we're trying to implement the backward pass, but there is still a technical question about the representation of the HAM parts. The current ideas are:

  • Adding FFN to LayerTypes. Although this makes the declaration template-less, it has the drawback of not supporting FFN<T> for all T.
  • Using FFN objects "as-is" (e.g., calling their Forward and Backward methods). This is a rather straightforward approach (which I'm following right now to simplify matters - see the sketch after this list), but it has the drawback of constraining us to FFN models.
  • Taking FFN objects and storing their layers separately. This approach fixes the drawbacks of the previous two, but it doesn't seem easy to implement - and we still have to figure out whether we actually need it in the HAM setting.
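
To make the second option concrete, here is a minimal stand-alone sketch of the idea: the HAM unit simply stores complete network objects and drives them through their Forward()/Backward() methods. SimpleNet below is a toy placeholder, not mlpack's FFN, and the signatures are illustrative only.

  #include <armadillo>

  // Toy stand-in for a trainable network; mlpack's FFN exposes analogous
  // Forward()/Backward() methods, though the real signatures may differ.
  struct SimpleNet
  {
    void Forward(const arma::vec& input, arma::vec& output)
    {
      output = arma::tanh(weights * input);
    }

    void Backward(const arma::vec& input, const arma::vec& error, arma::vec& delta)
    {
      const arma::vec activation = arma::tanh(weights * input);
      delta = weights.t() * (error % (1.0 - arma::square(activation)));
    }

    arma::mat weights;
  };

  // Option two: the HAM unit treats its controllers as opaque networks,
  // calling them "as-is" instead of pulling their layers apart.
  class HAMUnit
  {
   public:
    void Forward(const arma::vec& input, arma::vec& output)
    {
      arma::vec embedded;
      embed.Forward(input, embedded);
      search.Forward(embedded, output);
    }

   private:
    SimpleNet embed, search;  // In the real code these would be FFN objects.
  };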

So, the final week is going to be interesting - hope to get things done! :)

more...

Profiling for parallelization and parallel stochastic optimization methods - Week 10 & 11

The past two weeks were spent finishing up the implementation of SCD (adding the greedy descent policy based on the GS rule), adding more tests for the new code, and changing existing functions so that they satisfy the ResolvableFunctionType requirements. Some documentation outlining the various FunctionType interfaces was also added to highlight the minor differences and applications of these abstractions.
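
For context, the GS (Gauss-Southwell) rule greedily picks the coordinate whose partial derivative has the largest magnitude. A minimal illustrative sketch of that selection with Armadillo follows; the names here are placeholders, not the descent policy classes from the PR.

  #include <armadillo>
  #include <cstddef>

  // Gauss-Southwell selection: choose the coordinate with the largest
  // absolute partial derivative, then step along that coordinate only.
  size_t GreedyCoordinate(const arma::vec& gradient)
  {
    return arma::index_max(arma::abs(gradient));
  }

  void CoordinateDescentStep(arma::vec& iterate,
                             const arma::vec& gradient,
                             const double stepSize)
  {
    const size_t j = GreedyCoordinate(gradient);
    iterate(j) -= stepSize * gradient(j);
  }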

A minor inconsistency still needs to be resolved: some simple refactoring is required to make the layout of the decision variables consistent across the various functions in mlpack, so that SCD can work on disjoint parts of the decision variable (which is required for parallelization).

I am planning to finish the refactoring and parallelization work within the next 1-2 days. The next step is to benchmark the implementation on a few datasets to get an overview of the performance and find any areas for improvement.

more...

Cross-Validation and Hyper-Parameter Tuning: Week 11

Last week I was primarily finishing my work on k-fold cross-validation and the main part of the hyper-parameter tuning module. Sending a PR for k-fold cross-validation to the main repository was delayed because a bug in the linear regression copy constructor caused one of the k-fold cross-validation tests to fail. That is now fixed, and a PR for k-fold cross-validation has been sent.
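
As a rough usage sketch (the header paths, template arguments, and constructor signature here are my assumptions, so the PR itself is the authoritative reference), evaluating linear regression with 10-fold cross-validation might look like this:

  #include <mlpack/core.hpp>
  #include <mlpack/core/cv/k_fold_cv.hpp>
  #include <mlpack/core/cv/metrics/mse.hpp>
  #include <mlpack/methods/linear_regression/linear_regression.hpp>

  using namespace mlpack::cv;
  using namespace mlpack::regression;

  double EvaluateLambda(const arma::mat& data,
                        const arma::rowvec& responses,
                        const double lambda)
  {
    // 10-fold cross-validation of LinearRegression under mean squared error;
    // Evaluate() forwards lambda to the LinearRegression constructor.
    KFoldCV<LinearRegression, MSE> cv(10, data, responses);
    return cv.Evaluate(lambda);
  }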

As I have already mentioned, I was also finishing work on the main part of the hyper-parameter tuning module. It mainly concerns a new interface for mlpack optimizers that accepts a DatasetMapper parameter describing the data type and the possible values (if it is of categorical type) of each dimension (which, in the case of hyper-parameter optimization, corresponds to a hyper-parameter).

During the remaining time of GSoC I'm going to work on supporting gradient descent for hyper-parameter tuning, as well as write a final report.

more...

Neural Evolution Algorithms for NES games - Week 9 Progress

This week was amazing for me. I completed the CNE optimizer, which converges based on probability. Two things that really sped up the process for me were the paper mentioned below and last year's GSoC student Bang Lui's code, implemented using his own genome class structure.

The CNE optimizer has been implemented following the structure that mlpack optimizers have to follow. The code also makes good use of the Armadillo library and its functions. My mentor helped me throughout and provided the necessary starting points, fast code reviews, and corrections.

Boost test cases have been written for:

  • A simple XOR task
  • Logistic regression using the CNE optimizer
  • A vanilla network trained and tested on a larger dataset (the thyroid dataset in this case)

Apart from that, a complete doxygen tutorial has been written, with step-by-step example code and an explanation of the algorithm. It covers:

  • A detailed explanation of the constructor parameters and how to use them.
  • The feed-forward neural network library and how to use it, with simple code.
  • Putting it all together and training a vanilla network on the XOR task, with a complete example (a condensed sketch follows this list).
  • Using the optimizer with another model: we converge a logistic regression model, along with sample code.
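
A condensed sketch of what that XOR example might look like is below; the CNE constructor arguments and the include path for the optimizer are illustrative assumptions, and the doxygen tutorial has the exact interface.

  #include <mlpack/core.hpp>
  #include <mlpack/core/optimizers/cne/cne.hpp>
  #include <mlpack/methods/ann/ffn.hpp>
  #include <mlpack/methods/ann/layer/layer.hpp>

  using namespace mlpack::ann;
  using namespace mlpack::optimization;

  int main()
  {
    // XOR inputs (one column per sample) and the corresponding targets.
    arma::mat input = { { 0, 0, 1, 1 },
                        { 0, 1, 0, 1 } };
    arma::mat target = { { 0, 1, 1, 0 } };

    // A small vanilla feed-forward network: 2 -> 2 -> 1.
    FFN<MeanSquaredError<>> model;
    model.Add<Linear<>>(2, 2);
    model.Add<SigmoidLayer<>>();
    model.Add<Linear<>>(2, 1);
    model.Add<SigmoidLayer<>>();

    // Evolve the network weights with CNE instead of a gradient-based method
    // (population size, generations, and mutation settings are placeholders).
    CNE opt(50, 1000, 0.1, 0.02, 0.2);
    model.Train(input, target, opt);
  }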

So now the PR is ready to be merged.

The next part is the most important and interesting one: I will get to run the NEAT implementation on the legendary game Super Mario Bros. The code was previously implemented by Bang Lui, using his very nicely implemented genome, species, and population classes, but there is a glitch somewhere that stops it from completing level 1. The connection to the emulator is made using Lua scripts, and data is transferred in JSON format; my mentor zoq has already set that up. My work will be to find the bug and hopefully get the PR merged into the main repository.

more...

Deep Reinforcement Learning Methods - Week-9 Highlights

This week I released the PR for async one-step Q-learning and async one-step SARSA; it's under review now and I believe it will be merged soon. I also worked on A3C. I implemented a wrapper network for the actor and critic, and added a new reinforce layer for the policy gradient. The current architecture of the ANN module doesn't support shared layers, which are necessary for A3C. Using shared_ptr could address this problem, but it may introduce overhead and make it inconvenient for users to add new layer types. After discussing with Ryan, we decided to use an AliasLayer; the main challenge is making it compatible with member function checkers like HasParameters(). I thought I could address this by overloading or specializing boost::apply_visitor, but I soon realized that's impossible. I may have to add overloads for the AliasLayer template argument to all visitors.
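
The rough idea of the alias layer is a thin layer that owns nothing and forwards every call to the layer it wraps, so the actor and critic networks can share the same parameters. Below is a toy stand-alone sketch of that idea; it is not mlpack's actual AliasLayer, which additionally has to cooperate with the visitor machinery mentioned above.

  #include <armadillo>

  // A toy layer with weights; stands in for a real ANN layer type.
  struct ToyLinear
  {
    void Forward(const arma::mat& input, arma::mat& output)
    {
      output = weights * input;
    }

    arma::mat& Parameters() { return weights; }

    arma::mat weights;
  };

  // A minimal "alias": it owns nothing and forwards every call to the layer
  // it wraps, so two places in a network see exactly the same parameters.
  class Alias
  {
   public:
    explicit Alias(ToyLinear& layer) : layer(&layer) { }

    void Forward(const arma::mat& input, arma::mat& output)
    {
      layer->Forward(input, output);
    }

    arma::mat& Parameters() { return layer->Parameters(); }

   private:
    ToyLinear* layer;  // Non-owning: parameters live in the original layer.
  };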

more...

Tests of Atom domain code for vector problems - Week 9 and 10

I finally fixed all the API issues and many bugs, and opened a new PR for the vector problems code. It contains the code for:

  1. New Atom class API.
  2. Line search update step for the classic Frank-Wolfe algorithm (the textbook form of this step is recapped after the list).
  3. Regularization of lp ball constraint domain.
  4. Structured group constraint type in Jaggi's paper.
  5. Support for pruning in OMP.
  6. Full corrective update rule (update with the atom norm constraint).
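
For reference, the textbook form of the Frank-Wolfe iteration with the line search update from item 2 is:

  s_k = \arg\min_{s \in \mathcal{D}} \langle \nabla f(x_k), s \rangle
  \gamma_k = \arg\min_{\gamma \in [0, 1]} f(x_k + \gamma (s_k - x_k))
  x_{k+1} = x_k + \gamma_k (s_k - x_k)

where \mathcal{D} is the constraint domain (for example, the lp ball).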

Almost all of the above are techniques to make the algorithms either converge faster or give better solutions for ill-conditioned problems. I tried to design some examples in the tests/ folder that would let these techniques show their power. However, it does not seem possible to do that with small test problems, so the tests I have written so far only check the convergence of the algorithms, which basically indicates that the implementation is correct. As Stephen suggested, to keep a single PR from becoming too large to review, I will push all the comparison code for large-scale problems in my next PR.

more...

Deep Learning Module in mlpack (Week 10)


The past week has seen some good progress. I was finally able to finish the ssRBM & binary RBM PR. I also made some progress on fixing the GAN network: mostly I cleaned up the code and added a different initialization strategy (initializing weights on a per-layer basis). This actually fixed the vanishing-gradient error we were experiencing with the GAN PR. I also added a simpler test to check our implementation; it is based on Edwin Chen's blog and the test that Goodfellow's original paper is based on. The test is meant more to show that the generated outputs are very close to the real data.
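
To illustrate the general per-layer idea (this is just a sketch of the technique, not necessarily the exact strategy used in the PR): each layer draws its weights from a Gaussian whose scale depends on that layer's fan-in, instead of one global scale for the whole network.

  #include <armadillo>
  #include <cmath>
  #include <vector>

  // Per-layer Gaussian initialization: the standard deviation of each layer's
  // weights is scaled by that layer's fan-in rather than a single global value.
  std::vector<arma::mat> InitializeWeights(const std::vector<arma::uword>& sizes)
  {
    std::vector<arma::mat> weights;
    for (size_t i = 1; i < sizes.size(); ++i)
    {
      const double stddev = std::sqrt(1.0 / sizes[i - 1]);
      weights.push_back(stddev * arma::randn(sizes[i], sizes[i - 1]));
    }
    return weights;
  }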

The goal for next week is mainly to finish up the GAN PR. The major problem, as pointed out by Mikhail, is the CreateBatch function; I think it needs refactoring.

more...

Augmented Recurrent Neural Networks: Week 10

Since my benchmarking PR was finally merged, I have started working full-time on implementing the HAM unit. Of course, this is a very complicated task, so it deserves to be broken down into parts as well.

Currently we have (implicitly) agreed on these parts:

  • implementing TreeMemory (status: almost done, but some new changes don't compile; a conceptual sketch of the idea follows this list);
  • implementing the forward pass of HAMUnit (status: there is some plausible code, but we can't really test it due to the TreeMemory bug);
  • testing the forward pass of HAMUnit (status: all the components for the test case are ready and waiting for the first two stages to be resolved);
  • implementing and testing the backward pass of HAMUnit (status: there are some ideas on how to implement it - more in the final two weeks).
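
For context, the TreeMemory idea is a complete binary tree whose leaves hold memory cells and whose internal nodes hold a join of their two children, so writing a leaf only requires recomputing the path up to the root. Below is a toy sketch of that structure; the element-wise-max join and all names are illustrative, not mlpack's TreeMemory API, and it assumes the number of leaves is a power of two.

  #include <armadillo>
  #include <vector>

  // A complete binary tree stored as a flat array: node i has children
  // 2i + 1 and 2i + 2. Leaves hold memory cells; each internal node holds
  // join(left, right).
  class ToyTreeMemory
  {
   public:
    ToyTreeMemory(const size_t numLeaves, const size_t cellSize)
        : numLeaves(numLeaves),
          nodes(2 * numLeaves - 1, arma::vec(cellSize, arma::fill::zeros))
    { }

    // Write a leaf and recompute the joins on the path up to the root.
    void Update(const size_t leaf, const arma::vec& value)
    {
      size_t i = (numLeaves - 1) + leaf;  // Index of the leaf node.
      nodes[i] = value;
      while (i > 0)
      {
        i = (i - 1) / 2;
        // Illustrative join: an element-wise maximum of the two children.
        nodes[i] = arma::max(nodes[2 * i + 1], nodes[2 * i + 2]);
      }
    }

    const arma::vec& Root() const { return nodes[0]; }

   private:
    size_t numLeaves;
    std::vector<arma::vec> nodes;
  };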

As you can see, there is a lot of ongoing work - more details to follow in the final two weeks.

more...

Cross-Validation and Hyper-Parameter Tuning: Week 10

During the week I was primarily putting the finishing touches on the cross-validation code. To my delight, the simple cross-validation strategy finally got merged. I should be able to add the final edits to k-fold cross-validation shortly, so hopefully it will be ready to be merged soon.

I'm also working with my mentor on getting the hyper-parameter tuner merged. Right now it's hard to predict how much time it will take to finish.

more...

Neural Evolution Algorithms for NES games - Week 8 Progress

This week has been really exciting for me. I have completed the CMAES algorithm. The main highlights are:

1) Introduced a new time parameter with which the search can be tuned very well. It uses CPU time and calculates when to return based on the execution time of some functions. This solved a major bug in the code that had been a huge problem earlier. Searching on hard optimization problems, like the Rosenbrock function in up to 50 dimensions, was also found to work accurately.

2) Made certain changes to the constructor, along with other optimizations such as writing my own eigendecomposition function, which converges faster than the Armadillo library version, possibly because it does not check whether the matrix is orthogonal.

3) Writing the doxygen tutorial for CMAES was fun, and I tried my best to make it simple and complete. It explains the algorithm, the class the user has to write (an illustrative sketch follows this list), how to call the converge method, and how to use the optimizer with other models such as logistic regression.

4) Talked to my mentor and started the code for CNE. I'm currently working on the neural network optimization part. Bang Lui's code from last year's GSoC came in handy and is very helpful. The FFN library will be used, and CNE will be implemented as an optimizer for it.
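
As an illustration of the kind of class the user has to supply for point 3, a 2-D Rosenbrock objective might look roughly like this; the exact method names the CMAES implementation expects are described in the tutorial, so treat the ones below as assumptions.

  #include <armadillo>

  // Illustrative objective class: the 2-D Rosenbrock function. The methods
  // shown here (Evaluate and GetInitialPoint) are assumptions about the
  // interface; the doxygen tutorial is the authoritative reference.
  class RosenbrockFunction
  {
   public:
    double Evaluate(const arma::mat& x) const
    {
      const double a = x(1) - x(0) * x(0);
      const double b = 1.0 - x(0);
      return 100.0 * a * a + b * b;
    }

    arma::mat GetInitialPoint() const { return arma::mat("-1.2; 1.0"); }
  };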

more...