mlpack IRC logs, 2020-04-09

Logs for the day 2020-04-09 (starts at 0:00 UTC) are shown below.

--- Log opened Thu Apr 09 00:00:05 2020
04:14 -!- dendre [~dendre@138.229.116.186] has joined #mlpack
04:21 -!- dendre [~dendre@138.229.116.186] has quit [Quit: Leaving]
04:21 -!- dendre [~dendre@138.229.116.186] has joined #mlpack
04:24 -!- dendre [~dendre@138.229.116.186] has quit [Client Quit]
04:25 -!- dendre [~dendre@162.220.222.35] has joined #mlpack
04:49 -!- dendre [~dendre@162.220.222.35] has quit [Quit: Leaving]
07:00 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has joined #mlpack
07:33 < jenkins-mlpack2> Project docker mlpack nightly build build #667: UNSTABLE in 3 hr 19 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/667/
10:37 < chopper_inbound4> hey Joel Joseph (Gitter), can you try again and check whether the Softmax layer works for you? I've made the required changes.
10:43 < AbishaiEbenezerG> @nishantkr18 I'm having a look at #1912 and would be happy to help...
11:07 < AbishaiEbenezerG> also @zoq, we do not currently have any other implementation of a policy gradient method other than PPO right?
11:10 < AbishaiEbenezerG> i mean - i couldn't find any other PR or issue related to having a pg method
11:46 < AbishaiEbenezerG> basically , i'm looking to train an agent with a simple policy gradient method and see how it fares in a gym env.
11:46 < AbishaiEbenezerG> could i open an issue on this?
12:39 < zoq> AbishaiEbenezerG: Sure, feel free.
13:27 < sreenik[m]> freenode_gitter_prince776[m]: Awesome. I was thinking about something else, this is alright
13:43 < PrinceGuptaGitte> Alright, then I'll do the most interesting part and add 'std::string name' to all the remaining layers :)
13:43 < PrinceGuptaGitte> (edited) ... I'll do the ... => ... I'll complete doing the ...
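
A minimal sketch of the change being described: a layer class gaining a `std::string name` member with the getter/modifier accessor style used across mlpack's ANN layers. The `ExampleLayer` class here is hypothetical and for illustration only; it is not code from the actual PR.

    #include <string>
    #include <armadillo>

    // Hypothetical layer used only to illustrate the `std::string name` addition.
    template<typename InputDataType = arma::mat,
             typename OutputDataType = arma::mat>
    class ExampleLayer
    {
     public:
      //! Create the layer, optionally giving it a name.
      ExampleLayer(const std::string& name = "examplelayer") : name(name) { }

      //! Get the name of the layer.
      const std::string& Name() const { return name; }
      //! Modify the name of the layer.
      std::string& Name() { return name; }

     private:
      //! Locally-stored layer name.
      std::string name;
    };
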
14:00 < AnjishnuGitter[m> Hi @zoq, if you find some time, could you take a look at #2345 ? Thanks!
16:12 < ShikharJaiswalGi> @rcurtin Can the existing decision tree implementations be used for regression?
16:13 < rcurtin> ShikharJaiswalGi: in practice, no, but it wouldn't be too hard to adapt it :) basically new loss functions are needed, and then a new internal structure to be held by each node to store the regression coefficients / etc. needed for prediciton
16:13 < rcurtin> *prediction
16:13 < rcurtin> it's definitely useful support that would be awesome to add :)
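
A conceptual sketch of the kind of loss rcurtin is describing for a regression split: the reduction in squared error from dividing the responses into two children. The helper names (`SSE`, `SplitGain`) are illustrative only and do not follow mlpack's internal FitnessFunction interface.

    #include <armadillo>

    // Sum of squared errors of a set of responses around their mean
    // (illustrative helper, not part of mlpack).
    inline double SSE(const arma::rowvec& responses)
    {
      if (responses.n_elem == 0)
        return 0.0;
      return arma::accu(arma::square(responses - arma::mean(responses)));
    }

    // Gain of a candidate split: how much the squared error drops when the
    // parent's responses are divided into a left and a right child.
    inline double SplitGain(const arma::rowvec& parent,
                            const arma::rowvec& left,
                            const arma::rowvec& right)
    {
      return SSE(parent) - (SSE(left) + SSE(right));
    }

Each leaf would then store the mean of its responses as its prediction, which is the extra per-node structure mentioned above.
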
16:18 < ShikharJaiswalGi> Also, is the current implementation making use of gradients? Is it Gradient Boosted Decision Trees?
16:18 < rcurtin> nope, it's just a regular decision tree, but I do believe it can be trained in a weighted way
16:19 < rcurtin> so, it would be very easy to write a gradient boosting wrapper around it (in fact, you can use AdaBoost with decision trees as the weak learner to get *close* to GBDTs, but I don't think that the algorithms *exactly* line up, and AdaBoost was designed for a weak learner, not a full decision tree)
16:20 < rcurtin> our random forest implementation is basically an OpenMP for loop around the DecisionTree constructor
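
A rough sketch of that idea, assuming the mlpack 3.x `DecisionTree` constructor: one bootstrapped tree per loop iteration, with the iterations run in parallel by OpenMP. `BuildForest` is a hypothetical helper; the real `RandomForest` also subsamples dimensions at each split and handles more options.

    #include <mlpack/methods/decision_tree/decision_tree.hpp>
    #include <vector>

    // Hypothetical helper illustrating the "OpenMP for loop around the
    // DecisionTree constructor" idea; not mlpack's actual RandomForest code.
    std::vector<mlpack::tree::DecisionTree<>> BuildForest(
        const arma::mat& data,
        const arma::Row<size_t>& labels,
        const size_t numClasses,
        const size_t numTrees)
    {
      std::vector<mlpack::tree::DecisionTree<>> trees(numTrees);

      // Each tree is trained independently, so the loop parallelizes trivially.
      #pragma omp parallel for
      for (int i = 0; i < (int) numTrees; ++i)
      {
        // Bootstrap sample: draw n points with replacement.
        const arma::uvec indices = arma::randi<arma::uvec>(data.n_cols,
            arma::distr_param(0, (int) data.n_cols - 1));
        const arma::mat sampleData = data.cols(indices);
        const arma::Row<size_t> sampleLabels = labels.cols(indices);

        trees[i] = mlpack::tree::DecisionTree<>(sampleData, sampleLabels,
            numClasses);
      }

      return trees;
    }
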
16:23 < ShikharJaiswalGi> Hmm, I don't think AdaBoost would be that effective, but I haven't tried a full fledged decision tree either. I've tried using DecisionStumps in the past with ensembles, they work well, though I'm not sure if they surpass individual GBTs, they certainly wouldn't in terms of training times.
16:24 < rcurtin> give it a shot, AdaBoost is old and not trendy but it is an effective boosting technique :) if your data match the assumptions of the technique (can't remember what exactly those are at the moment), it may be quite effective
16:25 < ShikharJaiswalGi> I don't think we have tree pruning support as well?
16:26 < rcurtin> there were PRs opened for MDL-based pruning, but, they were never finished and merged, unfortunately
16:26 < rcurtin> you can do something kind of like pruning by setting 'z
16:26 < rcurtin> oops, "by setting minimum_leaf_size or minimum_gain_split"
16:28 < ShikharJaiswalGi> Yeah, but that would be "pre-pruning" and not "post-pruning" I feel.
16:28 < rcurtin> exactly, you're right
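
For reference, the two parameters mentioned above map to the `minimumLeafSize` and `minimumGainSplit` arguments of the `DecisionTree` constructor (as I understand the mlpack 3.x API); raising them limits how far the tree can grow, which is the "pre-pruning" effect being discussed. `TrainPrunedTree` is just an illustrative wrapper.

    #include <mlpack/methods/decision_tree/decision_tree.hpp>

    // Illustrative wrapper showing the pre-pruning knobs; constructor argument
    // order assumes the mlpack 3.x API and may differ in other versions.
    void TrainPrunedTree(const arma::mat& data,
                         const arma::Row<size_t>& labels,
                         const size_t numClasses)
    {
      const size_t minimumLeafSize = 20;    // no leaf smaller than 20 points
      const double minimumGainSplit = 1e-4; // don't split for tiny gains

      mlpack::tree::DecisionTree<> tree(data, labels, numClasses,
          minimumLeafSize, minimumGainSplit);
    }
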
16:35 < ShikharJaiswalGi> Apparently people have done past studies comparing AdaBoost and gradient boosting; gradient boosting is apparently the more general of the two.
17:37 * JoelJosephGitter sent a long message: < https://matrix.org/_matrix/media/r0/download/matrix.org/cWFKQeACcbnFcLnYoYReFFwh >
19:04 -!- dendre [~dendre@138.229.116.186] has joined #mlpack
19:14 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has quit [Quit: Leaving]
19:22 < PrinceGuptaGitte> @joeljosephjin This is because you haven't added Softmax layer to `LayerTypes` which is present in `layer_types.hpp` file.
19:23 < PrinceGuptaGitte> (edited) ... added Softmax layer ... => ... added `Softmax` layer ...
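
Roughly, the registration being described looks like the abridged sketch below: `layer_types.hpp` forward-declares each layer and lists a pointer to it inside the `LayerTypes` boost::variant, so a new layer such as `Softmax` has to be added to that list before the FFN's visitors can dispatch to it. The real list is much longer, and the exact layout may differ between mlpack versions.

    #include <boost/variant.hpp>
    #include <armadillo>

    // Abridged, illustrative sketch of layer_types.hpp; only three layers are
    // shown, and the simplified forward declarations stand in for the real ones.
    template<typename InputDataType, typename OutputDataType> class Add;
    template<typename InputDataType, typename OutputDataType> class Linear;
    template<typename InputDataType, typename OutputDataType> class Softmax;

    template<typename... CustomLayers>
    using LayerTypes = boost::variant<
        Add<arma::mat, arma::mat>*,
        Linear<arma::mat, arma::mat>*,
        /* ...many more layers... */
        Softmax<arma::mat, arma::mat>*,  // <- the new layer must appear here
        CustomLayers*...
    >;
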
20:20 -!- togo [~togo@2a02:6d40:34f8:8901:cd01:2d98:d7d3:115a] has joined #mlpack
20:28 * JoelJosephGitter sent a long message: < https://matrix.org/_matrix/media/r0/download/matrix.org/rUMWnkiyiuaMsrsLAqHlhKJO >
20:30 < PrinceGuptaGitte> Wow, I have no idea why this is. Can you show the full error message (use pastebin or something like that).
20:31 < JoelJosephGitter> https://pastebin.com/1PmEefaR heres the full error
20:33 < PrinceGuptaGitte> I think you're using `std::move()`, but we don't need that with an lvalue reference, maybe.
20:34 < PrinceGuptaGitte> (edited) ... using `std::move()`, but ... => ... using `std::move()` when calling forward and backward functions, but ...
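
(Joel rules this out just below, but for reference, the guess is about the following: a function that takes a non-const lvalue reference cannot be given a `std::move()`'d argument, because an rvalue will not bind to `T&`. The `Forward` function here is a hypothetical stand-in, not mlpack's actual layer API.)

    #include <utility>

    // Hypothetical stand-in with an output parameter taken by lvalue reference.
    void Forward(const double& input, double& output) { output = input * 2.0; }

    int main()
    {
      double in = 1.0, out = 0.0;
      Forward(in, out);              // fine: `out` is an lvalue
      // Forward(in, std::move(out)); // error: an rvalue cannot bind to `double&`
      return 0;
    }
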
20:34 < JoelJosephGitter> im not using it i think, i copied the code directly from the PR
20:35 < JoelJosephGitter> copy-pasted the whole stuff from softmax.hpp,softmax_impl.hpp, made changes on layer.hpp and layer_merge.hpp
20:36 < JoelJosephGitter> *layer_merge
20:36 < JoelJosephGitter> *layer_types.hpp
20:36 < JoelJosephGitter> (edited) *layer_types.hpp => copy-pasted the whole stuff from softmax.hpp,softmax_impl.hpp, made changes on layer.hpp and layer_types.hpp
22:16 -!- togo [~togo@2a02:6d40:34f8:8901:cd01:2d98:d7d3:115a] has quit [Quit: Leaving]
--- Log closed Fri Apr 10 00:00:06 2020