mlpack IRC logs, 2020-04-03

Logs for the day 2020-04-03 (starts at 0:00 UTC) are shown below.

--- Log opened Fri Apr 03 00:00:56 2020
03:52 < jenkins-mlpack2> Project docker mlpack weekly build build #101: UNSTABLE in 6 hr 5 min: http://ci.mlpack.org/job/docker%20mlpack%20weekly%20build/101/
05:04 < chopper_inbound4> hey sreenik[m] are you working on #1958 😉 I mean PR https://github.com/mlpack/mlpack/pull/1958? If you are not, then someone else can take this up, because this is something important.
05:18 -!- favre49 [~favre49@49.207.58.161] has joined #mlpack
05:21 < favre49> zoq: Hey, I wanted to get NEAT merged soon since I have free time during the lockdown. In the comments, you said that we would have to implement "tiles support" to be able to test it on the Super Mario World env. What did you mean by this? Is this something I could do, or that I need to do before we merge this?
05:23 < favre49> I just merged to master, and it's pretty amazing the number of additions we've had in 8 months - 592 files changed, 31457 insertions(+), 6127 deletions(-)
07:11 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has joined #mlpack
07:32 < sreenik[m]> freenode_chopper_inbound[m]: Feel free to take it up
07:34 < chopper_inbound4> sure :)
07:34 < jenkins-mlpack2> Project docker mlpack nightly build build #661: STILL UNSTABLE in 3 hr 20 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/661/
08:35 -!- togo [~togo@2a02:6d40:34f8:8901:983d:63b6:69cb:91f] has joined #mlpack
11:18 < zoq> favre49: I'd really like to test NEAT on a more complex env, like mario; unfortunately, the gym mario env uses pixels and not tiles for the input, so the input size is large and training is super slow. Maybe they have added tiles support.
11:20 -!- favre49 [~favre49@49.207.58.161] has quit [Remote host closed the connection]
11:49 < chopper_inbound4> Hi rcurtin zoq, I think ann/layer/layer.hpp aims to provide a way to add any layer(s) to a model by including only this file, but not all layers are included in here. Any reason?
12:05 < AbishaiEbenezerG> I would like to know what the difference is between return and reward. I tried searching for some material on what return means and I don't quite understand it
12:06 < AbishaiEbenezerG> if someone could help me understand what exactly return is, or give me some material to understand it, that would be really awesome
12:12 < zoq> AbishaiEbenezerG: Yesterday I used the term return because you used it; for me it's the same thing in the RL env context.
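
For reference, the distinction the question is about: the reward is the one-step scalar feedback from the environment, while the return is the accumulated (usually discounted) reward from a time step onward, which is what "average return" means later in this log. The standard definition, with discount factor gamma:

    % Return G_t: the discounted sum of all future rewards r_{t+k+1}.
    G_t = r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + \cdots
        = \sum_{k=0}^{\infty} \gamma^k \, r_{t+k+1}, \qquad 0 \le \gamma \le 1
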
12:12 < zoq> chopper_inbound4: Maybe we missed adding some of the layers.
12:12 < zoq> chopper_inbound4: No specific reason.
12:13 < chopper_inbound4> ohh... thanks zoq.
12:14 < chopper_inbound4> should I open a PR to fix that?
12:15 < zoq> chopper_inbound4: Please feel free.
12:16 < chopper_inbound4> ok :)
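
For context on the exchange above: ann/layer/layer.hpp is a convenience header whose job is to include the individual layer headers, so the fix chopper_inbound4 is volunteering for amounts to extending its include list. A hedged sketch of the pattern (the file names shown are illustrative, not a verbatim copy of the header):

    // ann/layer/layer.hpp (abridged sketch): one include per layer, so that
    // #include <mlpack/methods/ann/layer/layer.hpp> pulls in every layer type.
    #include "add_merge.hpp"
    #include "concat.hpp"
    #include "convolution.hpp"
    #include "dropout.hpp"
    #include "linear.hpp"
    // ... a layer whose header is missing from this list compiles fine on its
    // own but is invisible to users who include only layer.hpp.
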
12:59 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has quit [Ping timeout: 246 seconds]
13:10 < AbishaiEbenezerG> MountainCar-v0 is considered "solved" when the agent obtains an average reward of at least -110.0 over 100 consecutive episodes.
13:11 < AbishaiEbenezerG> i found the above line on the openai site
13:13 < AbishaiEbenezerG> my question is - in the implementation of mountain car in q_learning_test, the average return starts at -400 and increases to around -362. Why isn't it closer to -110?
13:14 < AbishaiEbenezerG> does that mean that our implementation of q_learning isn't good enough?
13:14 < naruarjun[m]> <NishantKumarGitt "Yeah, same here.. I m also not s"> I think this could be helped with the episodic replay PR
13:15 < AbishaiEbenezerG> i understand that q_learning was able to improve it to -362
13:15 < AbishaiEbenezerG> but i'm not sure if that's good enough, because -362 is not very close to -110
13:16 < AbishaiEbenezerG> i most probably am missing something here @zoq
13:17 < naruarjun[m]> <AbishaiEbenezerG "i understand that q_learning was"> I think that is because it is just a test. If the agent is not able to improve to -362 within 1000 episodes, then something is definitely wrong. You would see that when you run the test: it stops way before 1000 episodes are completed.
13:18 < AbishaiEbenezerG> and also, by setting config.DoubleQLearning() = true, one of the episode returns was -141
13:18 < AbishaiEbenezerG> so i'm kinda confused...
13:19 < AbishaiEbenezerG> but it's improved only a bit...
13:20 * AbishaiEbenezerG sent a long message: < https://matrix.org/_matrix/media/r0/download/matrix.org/fhvfPRePXzpoTovakmLbzlbU >
13:20 < zoq> AbishaiEbenezerG: It's just a simple test, so don't expect good results.
13:21 < AbishaiEbenezerG> @zoq, so if i want to use mlpack's q_learning implementation to solve this
13:21 < AbishaiEbenezerG> how do i do that?
13:22 < zoq> AbishaiEbenezerG: I guess an easy step would be to increase the number of iterations.
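
A rough sketch of what "increase the number of iterations" could look like, loosely modeled on the MountainCar setup in mlpack's q_learning_test.cpp of that era; the network sizes, hyperparameters, and the 5000-episode cap below are illustrative assumptions, not the test's exact values:

    #include <mlpack/methods/ann/ffn.hpp>
    #include <mlpack/methods/ann/init_rules/gaussian_init.hpp>
    #include <mlpack/methods/ann/layer/layer.hpp>
    #include <mlpack/methods/ann/loss_functions/mean_squared_error.hpp>
    #include <mlpack/methods/reinforcement_learning/q_learning.hpp>
    #include <mlpack/methods/reinforcement_learning/environment/mountain_car.hpp>
    #include <mlpack/methods/reinforcement_learning/policy/greedy_policy.hpp>
    #include <mlpack/methods/reinforcement_learning/replay/random_replay.hpp>
    #include <mlpack/methods/reinforcement_learning/training_config.hpp>
    #include <ensmallen.hpp>

    using namespace mlpack::ann;
    using namespace mlpack::rl;

    int main()
    {
      // Q-network: 2-d MountainCar state in, one Q-value per action out.
      FFN<MeanSquaredError<>, GaussianInitialization> model(
          MeanSquaredError<>(), GaussianInitialization(0, 0.001));
      model.Add<Linear<>>(2, 64);
      model.Add<ReLULayer<>>();
      model.Add<Linear<>>(64, 3);

      GreedyPolicy<MountainCar> policy(1.0, 1000, 0.1);
      RandomReplay<MountainCar> replayMethod(20, 10000);

      TrainingConfig config;
      config.StepSize() = 0.01;
      config.Discount() = 0.99;
      config.TargetNetworkSyncInterval() = 100;
      config.ExplorationSteps() = 400;
      config.StepLimit() = 200;

      QLearning<MountainCar, decltype(model), ens::AdamUpdate, decltype(policy)>
          agent(std::move(config), std::move(model), std::move(policy),
                std::move(replayMethod));

      // The unit test stops as soon as a loose threshold is met; to chase the
      // -110 "solved" bar, keep training for many more episodes and track a
      // smoothed return instead of stopping early.
      double smoothedReturn = -200.0;
      for (size_t episode = 0; episode < 5000; ++episode)
      {
        const double episodeReturn = agent.Episode();
        smoothedReturn = 0.99 * smoothedReturn + 0.01 * episodeReturn;
      }
      return 0;
    }
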
13:28 < rcurtin> jeffin143[m]: I'm ready to merge #2019 if you can handle the last issue with the Julia bindings; I debugged it last night and posted a comment about how to fix it :)
13:28 < rcurtin> basically I just want to see the builds pass with the Julia binding and I think it's good to go :)
13:46 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has joined #mlpack
14:15 * NishantKumarGitt sent a long message: < https://matrix.org/_matrix/media/r0/download/matrix.org/CkdyqRTQhjPAlziIKfcZDBaI >
16:08 < JoelJosephGitter> @abinezer have you tried tweaking the FFN's hyperparameters, such as step size, number of layers, etc.?
16:17 -!- favre49 [~favre49@49.207.58.161] has joined #mlpack
16:20 < favre49> zoq[m]: Judging by the time of the last commit, it doesn't appear it has tiles support. I'll try to find some other options from papers and the like. Do you have anything else in mind?
16:31 -!- travis-ci [~travis-ci@ec2-52-72-171-110.compute-1.amazonaws.com] has joined #mlpack
16:31 < travis-ci> shrit/ensmallen#12 (early_stopping - eb06770 : Omar Shrit): The build has errored.
16:31 < travis-ci> Change view : https://github.com/shrit/ensmallen/compare/89f794768fb8...eb06770774e3
16:31 < travis-ci> Build details : https://travis-ci.com/shrit/ensmallen/builds/158255991
16:31 -!- travis-ci [~travis-ci@ec2-52-72-171-110.compute-1.amazonaws.com] has left #mlpack []
16:34 -!- favre49 [~favre49@49.207.58.161] has quit [Quit: Lost terminal]
19:40 < himanshu_pathak[> Hey rcurtin: I tried to implement the copy constructor test you suggested, but when I try to delete the object pointer I get a "double free or corruption (out)" error. Can you suggest what the cause may be? I think it's because of the weights matrix.
19:57 < rcurtin> it sounds like the test is doing its job of finding bad memory usage :-D
19:58 < rcurtin> I'm not sure exactly what would be going wrong there, maybe it's worth checking the destructor of FFN and seeing what's being freed twice
19:58 < rcurtin> you can use valgrind for that, and another idea is one I sometimes do,
19:58 < rcurtin> where I add this line to the destructor of all relevant objects:
19:58 < rcurtin> std::cout << "destructing <class name>, object at " << this << "\n";
19:58 < rcurtin> that will print what's being destructed (assuming you substitute <class name> right ;)) and what the memory location is
19:59 < rcurtin> so then you can see what exactly is being deleted twice; you just need to look for the same memory location appearing on two lines :)
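
A self-contained toy showing the technique (the Layer/Model classes here are hypothetical stand-ins, not mlpack code): two owners of one pointer print the same address twice on the way down, which is exactly the pattern to grep for.

    #include <iostream>

    // Instrumented as suggested above: every destructor announces itself.
    struct Layer
    {
      ~Layer() { std::cout << "destructing Layer, object at " << this << "\n"; }
    };

    struct Model
    {
      Layer* layer;
      ~Model() { delete layer; }  // each Model believes it owns `layer`
    };

    int main()
    {
      Layer* shared = new Layer();
      Model a{shared};
      Model b{shared};  // shallow "copy": both Models point at the same Layer
      // As a and b go out of scope, the same address prints twice, and the
      // second delete is the "double free or corruption" described above.
      return 0;
    }
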
20:00 < himanshu_pathak[> > std::cout << "destructing <class name>, object at " << this << "\n";
20:00 < himanshu_pathak[> I will try it. That was not working when I was not copying, just deleting.
20:02 < himanshu_pathak[> I will try it; after reaching a conclusion about whether something is wrong or not, I will ask again. Thanks for the suggestion rcurtin: maybe I am missing something.
20:08 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has quit [Quit: Leaving]
20:19 < rcurtin> if you found another bug while trying to solve a different one, probably still worth debugging and fixing :)
20:32 < himanshu_pathak[> rcurtin: No, I was wrongly assigning pointers; now it is working. Thanks for your suggestion, that helped.
21:02 < rcurtin> sure, happy to help :)
21:30 < NishantKumarGitt> @zoq hey! Could you provide comments on https://github.com/mlpack/mlpack/pull/2317 ? I guess I've made the necessary changes :)
21:33 < zoq> NishantKumarGitt: It's on my list, probably tomorrow.
21:35 < NishantKumarGitt> Sure 👍
21:41 < zoq> kartikdutt18Gitt: I can at least reproduce the issue in the notebook.
21:43 < himanshu_pathak[> Hey rcurtin: zoq I have tested according to the test rcurtin gave, and it is working with the linear layer without any change in the code. What do you suggest? I think we don't need to call ResetParameters() in the copy constructor of FFN.
21:46 < himanshu_pathak[> Also, when I call Reset() in the copy constructor of the linear layer it causes an error, because Reset() is called every time ForwardVisitor is used, so it ends up being called multiple times.
21:50 < zoq> himanshu_pathak[: Hm, I think you have to call ResetParameters(), but I could be wrong; you could test whether it works right by changing the model's parameters, e.g. using model.Parameters()[0] = 1, and seeing if the layer parameters changed as well.
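
A minimal sketch of the check zoq describes, assuming the FFN copy constructor from the PR under discussion; the layer sizes are arbitrary:

    #include <iostream>
    #include <mlpack/methods/ann/ffn.hpp>
    #include <mlpack/methods/ann/layer/layer.hpp>

    using namespace mlpack::ann;

    int main()
    {
      // Build a tiny network and allocate its weights.
      FFN<> model;
      model.Add<Linear<>>(10, 5);
      model.Add<SigmoidLayer<>>();
      model.ResetParameters();

      FFN<> copy(model);          // the copy constructor under test

      model.Parameters()[0] = 1;  // mutate the original only

      // If the copy is deep, the two values now differ; if they are equal,
      // the copy (or its layers) still aliases the original's weight matrix.
      std::cout << model.Parameters()[0] << " vs "
                << copy.Parameters()[0] << "\n";
      return 0;
    }
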
21:51 < himanshu_pathak[> zoq: Ok I will try that.
21:52 < himanshu_pathak[> Thanks for the suggestion.
21:53 < rcurtin> himanshu_pathak[: be sure to also test with the CXXFLAGS specified in the original bug report :)
21:53 < rcurtin> (those are the ones that actually expose the error)
21:53 < himanshu_pathak[> -fsanitize=address
21:53 < himanshu_pathak[> this one
21:55 < rcurtin> yep :)
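
For anyone following along: that flag makes the compiler instrument every allocation, so the class of bug discussed above turns into an immediate, readable report. A tiny reproduction (compile with g++ -fsanitize=address -g):

    int main()
    {
      int* p = new int(42);
      delete p;
      delete p;  // AddressSanitizer aborts here with "attempting double-free"
      return 0;
    }
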
21:56 < himanshu_pathak[> Ok, I will try all these things today and update my PR tomorrow after making all the necessary changes
22:31 -!- togo [~togo@2a02:6d40:34f8:8901:983d:63b6:69cb:91f] has quit [Quit: Leaving]
22:37 < rcurtin> himanshu_pathak[: no hurry, I expect that it will take a long time to get everything right with this one :)
22:46 < himanshu_pathak[> rcurtin: Yes, I have to check everywhere, CNN and GAN etc. I checked with -fsanitize=address: no dangling pointer issue with the linear layer FFN. It's pretty late for me now, so I will try to catch up tomorrow. Thanks for helping; going to bed now.
--- Log closed Sat Apr 04 00:00:58 2020