mlpack IRC logs, 2020-03-20

Logs for the day 2020-03-20 (starts at 0:00 UTC) are shown below.

--- Log opened Fri Mar 20 00:00:36 2020
01:42 < rcurtin> what? I accidentally made the mlpack/models repository private
01:42 < rcurtin> I have no idea how that happened
01:42 < rcurtin> oops
01:42 < rcurtin> I didn't even know I could do that for an organization that doesn't have a paying account ...
03:56 < jenkins-mlpack2> Project docker mlpack weekly build build #99: STILL UNSTABLE in 6 hr 9 min: http://ci.mlpack.org/job/docker%20mlpack%20weekly%20build/99/
04:10 -!- witness [uid10044@gateway/web/irccloud.com/x-iiqmbayaezmdllve] has quit [Quit: Connection closed for inactivity]
04:22 -!- favre49 [~favre49@106.51.30.73] has joined #mlpack
04:23 < favre49> Question, does VAE still belong in the examples repo? I think not
04:24 < favre49> Also, perhaps it would make sense to put this change on the mailing list? Though that could wait till the release of 3.3
04:25 < favre49> Either way I'll put a message on relevant PRs and issues
04:31 -!- favre49 [~favre49@106.51.30.73] has quit [Quit: leaving]
04:31 -!- favre49 [~favre49@106.51.30.73] has joined #mlpack
04:33 < LakshyaOjhaGitte> hey @favre49 can you help me with some insight into how conv works in the layer?
04:34 < favre49> LakshyaOjhaGitte: Sorry, I'm not sure I know the code well enough to help you
04:34 < LakshyaOjhaGitte> Here are some [animations](https://github.com/vdumoulin/conv_arithmetic) that someone put up on GitHub
04:35 < LakshyaOjhaGitte> No problem, just want to understand how that works in regard to the animation
04:35 < favre49> What's your doubt though?
04:36 < LakshyaOjhaGitte> It's just that padding is done to the input and then like 4 blocks are used to generate a single output block in the animation
04:36 < LakshyaOjhaGitte> how does that take place?
04:38 < favre49> Wait which animation are you looking at? I'm not sure I understand what you're saying
04:39 < LakshyaOjhaGitte> convolution animation in the link, say the fourth one
04:39 < LakshyaOjhaGitte> I thought input and output were the same size
04:40 < LakshyaOjhaGitte> here it's different and it is using like 6 blocks of padded input (blue blocks) to generate 1 block of output (above one)
04:44 < favre49> In convolutions, the size of the output depends on the kernel size, input size, and padding applied
04:45 < favre49> Full padding in general is meant to increase the size of the output
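For reference, the relation favre49 is describing: along one dimension, a convolution with input size i, kernel size k, padding p per side, and stride s produces output size o = floor((i + 2p - k) / s) + 1. A minimal C++ sketch of just this arithmetic (not mlpack's API; the sizes below are only examples):

    #include <cstdio>

    // Output size of a convolution along one dimension.
    // i: input size, k: kernel size, p: zero-padding per side, s: stride.
    int ConvOutSize(const int i, const int k, const int p, const int s)
    {
      return (i + 2 * p - k) / s + 1;
    }

    int main()
    {
      // "Full" padding (p = k - 1) grows the output, as in the animations.
      std::printf("full:  %d\n", ConvOutSize(5, 3, 2, 1)); // prints 7
      // "Same" padding (p = (k - 1) / 2 for odd k) keeps the size unchanged.
      std::printf("same:  %d\n", ConvOutSize(5, 3, 1, 1)); // prints 5
      // No padding ("valid") shrinks the output.
      std::printf("valid: %d\n", ConvOutSize(5, 3, 0, 1)); // prints 3
      return 0;
    }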
04:45 < LakshyaOjhaGitte> is that called upscaling? heard the term somewhere
04:46 < favre49> I've only heard that in the context of video and picture resolutions
04:47 < LakshyaOjhaGitte> okay thanks for the help :)
04:47 < favre49> Welcome, glad I was able to resolve it
04:47 < LakshyaOjhaGitte> also wanted to point out that the documentation of convolution is not that good, I think
04:47 < LakshyaOjhaGitte> https://www.mlpack.org/doc/mlpack-git/doxygen/classmlpack_1_1ann_1_1Convolution.html
04:48 < LakshyaOjhaGitte> Shouldn't the detailed description section be improved, so that if someone reads it, it gives easily understandable info?
04:48 < Saksham[m]> From what I’ve read, we have transposed convolution which can be thought of as upscaling an image with convolution
04:49 < LakshyaOjhaGitte> Yup
04:50 < favre49> The documentation you're looking at isn't really meant to be a tutorial
04:50 < Saksham[m]> I don’t think the mlpack documentation needs to be detailed enough for someone to learn what convolution means
04:50 < Saksham[m]> There are plenty of resources online
04:50 < favre49> But we could have a tutorial for it. Unfortunately I wouldn't have the time, but feel free to add it
04:51 < favre49> Saksham[m]: You're right, but from a marketing perspective it would be great if people could come to our website and learn about CNNs, and proceed to use mlpack for it :)
04:51 < LakshyaOjhaGitte> Yeah, plenty of resources out there, but we should not pass up any chance to improve the documentation
04:51 < Saksham[m]> I’m working on the tutorial for CNN MNIST
04:51 < LakshyaOjhaGitte> Exactly
04:52 < Saksham[m]> I could add details about convolution there
04:52 < LakshyaOjhaGitte> yeah sure no problem with me
04:52 < LakshyaOjhaGitte> better than opening a PR for this
04:52 < Saksham[m]> Since the new examples directory works as a tutorial directory, it makes sense to add it there
04:53 < Saksham[m]> Sure I would explain convolution for a beginner somewhere in the tutorial
04:56 < LakshyaOjhaGitte> yeah, for beginners it can be done. Also, can you refer to [this](https://arxiv.org/pdf/1603.07285.pdf) to explain the working of convolution?
04:57 < LakshyaOjhaGitte> what do you say
04:57 < Saksham[m]> Should this be done for every machine learning algorithm?
04:58 < jeffin143[m]> That would be a lot of work Saksham :)
04:58 < jeffin143[m]> But if you have the fuel, you can definitely do it
04:58 < LakshyaOjhaGitte> I was thinking of exactly this; it is good, but kind of complex and a lot of work
04:58 < Saksham[m]> I can open up an issue, we can let people take it up
04:58 < jeffin143[m]> I believe you should start a channel and teach mlpack :)
04:58 < jeffin143[m]> I will definitely subscribe to it
04:58 < jeffin143[m]> > I can open up an issue, we can let people take it up
04:58 < jeffin143[m]> Saksham: yeah, maybe
04:58 < LakshyaOjhaGitte> maybe someone who wants a good first PR can take this?
04:59 < Saksham[m]> Nice idea
04:59 < LakshyaOjhaGitte> :)
04:59 < Saksham[m]> But should this be done in the mlpack repo?
05:00 < Saksham[m]> I feel we should keep the function definition a little separate from the tutorials
05:18 < LakshyaOjhaGitte> I don't know how the documentation gets changed; a mentor can provide better insight.
05:20 < Saksham[m]> The documentation is automatically generated from the source code using Doxygen, if I’m right.
05:20 < Saksham[m]> Not sure though
05:21 < Saksham[m]> The comments and annotations in the source code
05:22 < birm[m]1> Mostly. There's also ./doc which has some guides not associated with a particular place in source
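For anyone unfamiliar with the pipeline being described: the class reference pages (like the Convolution page linked above) are generated by Doxygen from comment blocks in the headers, so improving that documentation means editing those comments. A hypothetical example of the style, with the function and parameters invented purely for illustration:

    /**
     * Apply the forward pass of the layer to the given input.
     * (Hypothetical signature; shown only to illustrate the Doxygen
     * comment style the reference documentation is generated from.)
     *
     * @param input Input data matrix.
     * @param output Matrix to store the forward-pass results in.
     */
    void Forward(const arma::mat& input, arma::mat& output);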
05:24 < favre49> There is merit to the idea of revamping the "tutorials". I've wanted to do it but haven't gotten around to properly thinking about it
05:25 < favre49> I really liked how comprehensive Theano's tutorial section was
05:25 < chopper_inbound4> I recently opened an issue https://github.com/mlpack/mlpack/issues/2309 regarding documentation.
05:25 < kartikdutt18Gitt> Hey Saksham, I have left a comment about documentation, what I think it should look like. Would love to get everyone's opinion on it. Here is the [link](https://github.com/mlpack/examples/issues/65#issuecomment-601542942). Thanks.
05:27 < favre49> chopper_inbound4: Yeah I saw that, what I was thinking was a complete revamp, including the website. It would be a gigantic project
05:28 < favre49> Like I said, I haven't put enough thought into how I would like it structured. Maybe I'll do it now with this extra time :)
05:28 < kartikdutt18Gitt> > `favre49 on Freenode` Saksham: You're right, but from a marketing perspective it would be great if people could come to our website and learn about CNNs, and proceed to use mlpack for it :)
05:28 < kartikdutt18Gitt> I think a brief description of the layer and some reference links to blogs or papers should be fine. What do you think?
05:29 < SriramSKGitter[m> Just a thought, how would the examples repo relate to the tutorials on mlpack.org ? Are they supposed to be complementary or is one going to replace the other?
05:29 < chopper_inbound4> favre49 : agree. We need some web developers.
05:29 < favre49> kartikdutt18Gitt: Actually, I wanted something more like http://deeplearning.net/software/theano/tutorial/conv_arithmetic.html
05:30 < favre49> I think this is also where LakshyaOjhaGitte's animations are from :)
05:31 < favre49> It's a lot of work though. May be the kind of project that you would give for Google Summer of Documentation or whatever it's called
05:32 < kartikdutt18Gitt> @sriramsk1999, I think they are supposed to be complementary. The tutorials on the website are simple code, whereas the examples repo will have really cool projects that a user could look at, run, understand, change, and so on.
05:32 < jeffin143[m]> favre49: that's a whole gsoc project :)
05:32 < favre49> Yeah, ideally the tutorials would also link to the examples repo
05:32 < favre49> jeffin143[m]: I had thought of it as a gsoc project, but it's too documentation intensive. Doesn't really fit imo
05:33 < favre49> Another gsoc project I had thought of was NLP related but I never got the time to flesh out the idea. I'll probably just co-mentor :)
05:33 < kartikdutt18Gitt> >favre49 on Freenode kartikdutt18 (Gitter): Actually, I wanted something more like http://deeplearning.net/software/theano/tutorial/conv_arithmetic.html
05:33 < kartikdutt18Gitt> I think this is really cool, but this would require a whole website revamp as you mentioned.
05:33 < chopper_inbound[> Maybe someone can take it up outside of GSoC.
05:34 < favre49> Yeah, if I ever flesh out the idea properly and decide on a good structure I'll create a big help wanted issue
05:34 < SriramSKGitter[m> @kartikdutt18 : Yeah that sounds good. Perhaps the tutorials could feature a more in-depth explanation of the API and the examples focus on cool applications :)
05:34 < jeffin143[m]> favre49: NLP is on my to-do list
05:34 < chopper_inbound[> No hurry?
05:34 < jeffin143[m]> I have lots of plans
05:34 < jeffin143[m]> I was thinking of putting up a proposal 😂
05:34 < jeffin143[m]> But it was too late
05:35 < jeffin143[m]> favre49 : also implementing new callbacks for ensmallen *
05:36 < kartikdutt18Gitt> >@kartikdutt18 : Yeah that sounds good. Perhaps the tutorials could feature a more in-depth explanation of the API and the examples focus on cool applications :)
05:36 < kartikdutt18Gitt> Agreed. It would be nice to see some great projects in it.
05:36 < jeffin143[m]> Like saving models after every epoch, or writing to files
05:36 < favre49> chopper_inbound[: Nah I'm not in a hurry, especially since I have so many competing interests it's hard to choose one
05:38 < jeffin143[m]> Does anybody know a website where you can do online code challenges with a friend
05:38 < jeffin143[m]> Like a 1v1 code fight
05:42 < chopper_inbound4> favre49 : haha, that happened to me as well.
06:58 < sreenik[m]> jeffin143[m]: Yeah I used to do it in school, there was a site by the name codefights apparently. I hope it still exists
07:11 < LakshyaOjhaGitte> Hi @sreenik, can you give me some insight on the attention layer?
07:12 < LakshyaOjhaGitte> is recurrent attention and attention the same thing?
07:14 -!- johnsoncarl[m] [johnsoncar@gateway/shell/matrix.org/x-bwlreciozvjtvgzz] has quit [Ping timeout: 260 seconds]
07:14 -!- SakshamRastogiGi [gitterco6@gateway/shell/matrix.org/x-gghaigtzoumxujvu] has quit [Ping timeout: 260 seconds]
07:14 -!- tejasvi[m] [tstomarmat@gateway/shell/matrix.org/x-bcesohjoptafhegc] has quit [Ping timeout: 260 seconds]
07:14 -!- hemal[m] [hemalmatri@gateway/shell/matrix.org/x-aacunpwmmganuwli] has quit [Ping timeout: 260 seconds]
07:14 -!- TanayMehtaGitter [gittertana@gateway/shell/matrix.org/x-wxstdgmehjubuhlv] has quit [Ping timeout: 260 seconds]
07:15 -!- geek-2002Gitter[ [gitterge5@gateway/shell/matrix.org/x-scyfguqmkaqqghmb] has quit [Ping timeout: 260 seconds]
07:15 -!- EL-SHREIFGitter[ [gitterel-s@gateway/shell/matrix.org/x-zmtouzrhfhpilhwu] has quit [Ping timeout: 260 seconds]
07:15 -!- Shikhar-SGitter[ [gittersh8@gateway/shell/matrix.org/x-wgrdulkazxhmzovx] has quit [Ping timeout: 260 seconds]
07:15 -!- bkb181[m] [bkb181matr@gateway/shell/matrix.org/x-arqklacauaivtsdh] has quit [Ping timeout: 260 seconds]
07:15 -!- GitterIntegratio [gitterbotm@gateway/shell/matrix.org/x-mntcqbfubeacehbl] has quit [Ping timeout: 260 seconds]
07:25 < jenkins-mlpack2> Project docker mlpack nightly build build #647: STILL FAILING in 3 hr 11 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/647/
07:27 -!- johnsoncarl[m] [johnsoncar@gateway/shell/matrix.org/x-vftxyyvuzcnjrcdv] has joined #mlpack
07:27 -!- SakshamRastogiGi [gitterco6@gateway/shell/matrix.org/x-cfhkqczwgcxdvaha] has joined #mlpack
07:27 -!- hemal[m] [hemalmatri@gateway/shell/matrix.org/x-icrjfmvjsnnuiaxq] has joined #mlpack
07:27 -!- tejasvi[m] [tstomarmat@gateway/shell/matrix.org/x-yibktohsegwutcdj] has joined #mlpack
07:28 -!- TanayMehtaGitter [gittertana@gateway/shell/matrix.org/x-rqrkvlwuqjjpzgsv] has joined #mlpack
07:30 -!- bkb181[m] [bkb181matr@gateway/shell/matrix.org/x-bbgggxuiagnyxohf] has joined #mlpack
07:30 -!- Shikhar-SGitter[ [gittersh8@gateway/shell/matrix.org/x-xgmuuviggbafgktt] has joined #mlpack
07:30 -!- EL-SHREIFGitter[ [gitterel-s@gateway/shell/matrix.org/x-weywrcqgjcvyqfae] has joined #mlpack
07:30 -!- geek-2002Gitter[ [gitterge5@gateway/shell/matrix.org/x-rlbrmxfchhmxrgdv] has joined #mlpack
08:08 < sreenik[m]> freenode_gitter_ojhalakshya[m]: Hey I am not quite familiar with the attention layer. I hope someone else helps you out here :)
08:12 < chopper_inbound4> Lakshya Ojha (Gitter) : No, recurrent attention and attention are not the same. I think you are talking about recurrent_attention layer in mlpack.
08:12 < chopper_inbound4> You can refer to https://github.com/mlpack/mlpack/issues/2296
08:25 < chopper_inbound4> This blog can help you understand attention.
08:25 < chopper_inbound4> https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html
09:11 < PrinceGuptaGitte> I think there is a fault in the implementation of the `Padding` layer. I have opened issue #2318 for it.
09:28 < LakshyaOjhaGitte> oh okay Thanks chopper_inbound
09:28 -!- witness [uid10044@gateway/web/irccloud.com/x-dhtnoqxguvdvdbvq] has joined #mlpack
09:30 < chopper_inbound4> :)
09:33 < naruarjun[m]> Hey
09:33 < naruarjun[m]> I have written a test and added it to tests.
09:33 < naruarjun[m]> I wanted to ask how I can build only that test file, and not the entire mlpack_test?
09:35 < jeffin143[m]> ./bin/mlpack_test -t testname : naruarjun
09:35 < jeffin143[m]> The test name would be at the start of the test file
09:35 < naruarjun[m]> Thanks got it.
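A usage sketch of what jeffin143 describes, assuming the Boost.Test-based mlpack_test binary of this era (`-t` is shorthand for Boost.Test's `--run_test`; the suite and case names below are only illustrative):

    make mlpack_test                                  # build only the test target
    bin/mlpack_test -t ConvolutionalNetworkTest       # run one test suite
    bin/mlpack_test -t ConvolutionalNetworkTest/VanillaNetworkTest   # run one test case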
09:51 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has joined #mlpack
12:11 -!- favre49 [~favre49@106.51.30.73] has quit [Ping timeout: 264 seconds]
12:31 -!- favre49 [~favre49@106.51.30.73] has joined #mlpack
12:34 < bisakh[m]> Hi zoq, I have a query: on the GSoC ideas page, under 'ESSENTIAL DEEP LEARNING MODULES', I find the proposed work for `WGAN-GP` is already implemented in mlpack under src/methods/ann/gan. Is mlpack focusing on another, different implementation?
12:36 < himanshu_pathak[> bisakh: I think it is already there
12:38 < himanshu_pathak[> That was a list of ideas, but you can propose whatever module you like
12:40 < bisakh[m]> Himanshu Pathak: yeah, sure. So if we talk about the above-mentioned idea, we have to implement the whole thing, i.e. all submodules, over the summer, isn't it?
12:43 < bisakh[m]> I like that idea, that's why. I have a fondness for any kind of adversarial model.
12:45 < himanshu_pathak[> bisakh: Yes, you have to implement the full module if you are trying to implement a new GAN submodule, and you have to take care of tests as well.
12:55 < bisakh[m]> Thanks himanshu. What about the WGAN with gradient penalty?
12:59 < AbishaiEbenezerG> Hi mlpack! Regarding the GSoC proposal, since the mentors may not be reviewing our proposals and giving us feedback in the next week, I would like a few pointers on what I should (or should not) include, and what mlpack will be looking for in particular...
12:59 < himanshu_pathak[> bisakh: Oh I see, you were talking about both the WGAN-GP and PACGAN ideas. Firstly, WGAN-GP is already completed; I don't think there is anything left in that. Now, the PACGAN idea is left; if you are familiar with the codebase and understand the paper, it will be a great idea to implement
13:00 < zoq> AbishaiEbenezerG: Have you seen the application guide: https://github.com/mlpack/mlpack/wiki/Google-Summer-of-Code-Application-Guide
13:01 < AbishaiEbenezerG> oh hi @zoq. I was thinking you were busy, as I recently read that you would be busy with something else...
13:01 < AbishaiEbenezerG> Yes, I have read that a few times... is that all I need to know?
13:02 -!- favre49 [~favre49@106.51.30.73] has quit [Remote host closed the connection]
13:03 < zoq> I'm super busy right now, if everything works out I do have some more time tomorrow.
13:04 < zoq> AbishaiEbenezerG: But the wiki page is a good starting point.
13:09 < rcurtin> AbishaiEbenezerG: remember, mentors can always reach out and ask for clarifications after the proposal deadline has passed
13:10 < AbishaiEbenezerG> oh alright
13:11 < himanshu_pathak[> zoq: I was discussing with rcurtin that if I want to use FFN as a layer, I have to define a new Backward() in which I am passing const InputType input as an argument, but if someone is using FFN as a model and passes const InputType input, it will cause an error in that case
13:12 < himanshu_pathak[> Should I add a warning for this, or do you have a different suggestion?
13:13 < AbishaiEbenezerG> @joeljosephjin why is #2275 closed?
13:13 < zoq> himanshu_pathak[: Maybe using the sequential layer is another option, but a warning sounds fine as well.
13:17 < himanshu_pathak[> zoq: Yes, using the sequential layer would be a good thing, but I also like the idea of using FFN as a layer, because I may be doing the same thing with RBF to implement DBN, though that case might be quite different. I just want to experiment with this.
13:18 < zoq> himanshu_pathak[: I see, we can modify the FFN class as well, if we have to.
13:19 < himanshu_pathak[> zoq: Ok, so I will go with the warning approach
13:19 < zoq> Sounds good
13:20 < himanshu_pathak[> Thanks for the suggestion
13:21 < Manav-KumarGitte> himanshu_pathak: Hey, are you working on your proposal or on some issue/PR which is already open?
13:44 < himanshu_pathak[> Manav-Kumar (Gitter): That question was related to my PR
13:45 < Manav-KumarGitte> Ok.
13:45 < himanshu_pathak[> Neural Turing Machine Implementation
14:23 -!- travis-ci [~travis-ci@ec2-52-70-76-13.compute-1.amazonaws.com] has joined #mlpack
14:23 < travis-ci> shrit/models#14 (digit - 3f184ca : Omar Shrit): The build is still failing.
14:23 < travis-ci> Change view : https://github.com/shrit/models/compare/d273f4772f23...3f184ca605d4
14:23 < travis-ci> Build details : https://travis-ci.com/shrit/models/builds/154262845
14:23 -!- travis-ci [~travis-ci@ec2-52-70-76-13.compute-1.amazonaws.com] has left #mlpack []
14:42 < AbishaiEbenezerG> Where can I find documentation on the tests?
14:46 -!- drock [2f1faace@47.31.170.206] has joined #mlpack
14:54 -!- drock [2f1faace@47.31.170.206] has quit [Remote host closed the connection]
15:17 < AnjishnuGitter[m> I was looking through the loss functions in mlpack and couldn’t find Smooth L1 Loss. I intend to start working on it probably by tomorrow. Let me know if I am missing something and it is actually implemented somewhere. Thanks.
15:18 < Saksham[m]> Smooth L1 is implemented if I remember correctly
15:20 < Saksham[m]> <https://github.com/mlpack/mlpack/pull/2199>
15:23 < AnjishnuGitter[m> I see.. thanks for that. I didn’t actually know the name Huber Loss, which is why the confusion
15:24 < Saksham[m]> hahah, happened to me too. I actually opened a PR for this after implementing it, only to find this out later. xD
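For context on why the PR above covers it: smooth L1 and Huber loss are the same piecewise function (smooth L1 is Huber with threshold 1), quadratic near zero and linear in the tails. A standalone C++ sketch of the formula, not mlpack's HuberLoss API:

    #include <cmath>

    // Huber / smooth-L1 loss for one residual x with threshold delta:
    // 0.5 * x^2 when |x| <= delta, and delta * (|x| - 0.5 * delta) otherwise.
    double HuberLoss(const double x, const double delta)
    {
      const double a = std::fabs(x);
      return (a <= delta) ? 0.5 * x * x : delta * (a - 0.5 * delta);
    }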
15:25 < AnjishnuGitter[m> XD. Also, could you actually help me out with one more thing. I was making a list of some stuff I want to implement, but which I couldn’t find in mlpack. Is any of the following already implemented?
15:25 * AnjishnuGitter[m sent a long message: < https://matrix.org/_matrix/media/r0/download/matrix.org/FnMKvNIJcxDHHcqnTwLjvuAS >
15:26 < Saksham[m]> <https://github.com/mlpack/mlpack/issues/2200>
15:26 < Saksham[m]> This might be helpful
15:27 < AnjishnuGitter[m> Okay. Thanks. I will have a look through that.
15:28 < Saksham[m]> or just go through the loss functions folder inside mlpack
15:29 < AnjishnuGitter[m> 😅 yeah, that’s what I was doing initially. But then stuff like Huber came up, which I hadn’t heard of before.
15:30 < Saksham[m]> Sometimes I do a search on PRs before implementing; it can be helpful…
15:31 < Saksham[m]> Anyway happy to help if you have doubts
15:33 < AnjishnuGitter[m> One more thing. I notice that Gaurav Singh mentioned on #2200 that he wanted to work on Multi Label Margin Loss back on Feb 11, but I don't see a PR from him referencing this issue yet. So, should I assume that he is still working on it, or do I assume that it is not implemented? This is a specific case, but I have noticed situations like this with some other PRs too. What do you do in such cases?
15:34 < Saksham[m]> Just tag him in the issue (#2200) and ask if he’s still working on it.
15:34 < Saksham[m]> add a comment regarding it and tag him in it
15:35 < AnjishnuGitter[m> I see. Okay👌
15:36 < AnjishnuGitter[m> Thanks so much for your help!
15:39 -!- travis-ci [~travis-ci@ec2-3-81-92-166.compute-1.amazonaws.com] has joined #mlpack
15:39 < travis-ci> shrit/models#15 (digit - 45a72c1 : Omar Shrit): The build is still failing.
15:39 < travis-ci> Change view : https://github.com/shrit/models/compare/3f184ca605d4...45a72c1159ce
15:39 < travis-ci> Build details : https://travis-ci.com/shrit/models/builds/154275818
15:39 -!- travis-ci [~travis-ci@ec2-3-81-92-166.compute-1.amazonaws.com] has left #mlpack []
15:41 < Saksham[m]> 🙂
16:13 -!- travis-ci [~travis-ci@ec2-3-86-222-225.compute-1.amazonaws.com] has joined #mlpack
16:13 < travis-ci> shrit/models#16 (digit - 6332beb : Omar Shrit): The build has errored.
16:13 < travis-ci> Change view : https://github.com/shrit/examples/compare/45a72c1159ce...6332bebf8585
16:13 < travis-ci> Build details : https://travis-ci.com/shrit/models/builds/154282509
16:13 -!- travis-ci [~travis-ci@ec2-3-86-222-225.compute-1.amazonaws.com] has left #mlpack []
16:51 -!- travis-ci [~travis-ci@ec2-35-175-146-82.compute-1.amazonaws.com] has joined #mlpack
16:51 < travis-ci> shrit/models#17 (digit - 7e1120d : Omar Shrit): The build has errored.
16:51 < travis-ci> Change view : https://github.com/shrit/examples/compare/6332bebf8585...7e1120d49b44
16:51 < travis-ci> Build details : https://travis-ci.com/shrit/models/builds/154288547
16:51 -!- travis-ci [~travis-ci@ec2-35-175-146-82.compute-1.amazonaws.com] has left #mlpack []
16:59 < metahost> rcurtin, zoq: I have shared a draft proposal via the application portal, will you have time to look at it? :)
18:34 < JoelJosephGitter> @abinezer I closed the issue because it was not an issue. I only trained it for 100 episodes and it did not change its test average, but when I ran it for a thousand episodes, it did work https://ibb.co/tXNrVJj
18:52 -!- RishabhGoel[m] [slackml_36@gateway/shell/matrix.org/x-wfedjljywruwtdil] has joined #mlpack
18:52 < RishabhGoel[m]> 💃 Just arrived! Trying to familiarize myself before applying for GSoC.
19:01 < SriramSKGitter[m> @nishantkr18 : No, they are just suggestions :)
19:01 < JoelJosephGitter> @abinezer I don't think there is detailed documentation for the tests yet, but see if the gist in here can help https://medium.com/@joeljosephjin/asynchronous-deep-reinforcement-learning-with-mlpack-140ee573a235 . :) What is the difficulty that you said you faced with the code for DQN on the mountain car environment?
19:01 -!- eadwu[m] [eadwumatri@gateway/shell/matrix.org/x-arvkfiozweibjqgx] has left #mlpack ["User left"]
19:06 < JoelJosephGitter> I usually just copy the code between the BOOST_AUTO_TEST_CASE(WhateverAlgorithm) { /* code here */ }, change the "Log::Debug" to "std::cout", and paste it into int main() { }, and it works
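A minimal sketch of that pattern, with the test body below invented purely for illustration (the idea is just to turn one BOOST_AUTO_TEST_CASE body into a standalone program):

    #include <iostream>
    #include <mlpack/core.hpp>

    int main()
    {
      // Body copied from some BOOST_AUTO_TEST_CASE(...) in mlpack's test
      // suite, with Log::Debug output swapped for std::cout.
      arma::mat data(3, 10, arma::fill::randu);
      std::cout << "mean of random data: "
                << arma::mean(arma::vectorise(data)) << std::endl;
      return 0;
    }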
19:18 -!- NishantKumarGitt [gittern_18@gateway/shell/matrix.org/x-foyahmmhvliyolki] has joined #mlpack
22:13 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has quit [Read error: Connection reset by peer]
23:02 -!- witness [uid10044@gateway/web/irccloud.com/x-dhtnoqxguvdvdbvq] has quit [Quit: Connection closed for inactivity]
23:25 -!- mlozhnikov[m]1 [lozhnikovm@gateway/shell/matrix.org/x-drgrvfjxqjyokahm] has joined #mlpack
23:25 -!- mlozhnikov[m] [lozhnikovm@gateway/shell/matrix.org/x-fulklkklyoyoiign] has quit [Ping timeout: 246 seconds]
23:36 -!- bisakh[m]1 [slackml28@gateway/shell/matrix.org/x-onmbhutqebovnygm] has joined #mlpack
23:37 -!- bisakh[m] [slackml28@gateway/shell/matrix.org/x-eqzuyiehadehwcot] has quit [Ping timeout: 246 seconds]
--- Log closed Sat Mar 21 00:00:37 2020