mlpack IRC logs, 2018-05-30

Logs for the day 2018-05-30 (starts at 0:00 UTC) are shown below.

--- Log opened Wed May 30 00:00:41 2018
00:05 -!- Guest63658 [sid227710@gateway/web/irccloud.com/x-xaiyjegajmsnniqg] has quit [Ping timeout: 265 seconds]
00:05 -!- Guest63658 [sid227710@gateway/web/irccloud.com/x-zfndnkovjknehxwh] has joined #mlpack
00:37 -!- prakhar_code[m] [prakharcod@gateway/shell/matrix.org/x-wdnmwemdecusxyac] has quit [Ping timeout: 240 seconds]
00:37 -!- killer_bee[m] [killerbeem@gateway/shell/matrix.org/x-ypbcytcezxolxbau] has quit [Ping timeout: 245 seconds]
01:05 -!- prakhar_code[m] [prakharcod@gateway/shell/matrix.org/x-hmxeldthgdhjtdwx] has joined #mlpack
01:40 -!- killer_bee[m] [killerbeem@gateway/shell/matrix.org/x-whrntpjvjvaehyou] has joined #mlpack
02:20 -!- sumedhghaisas [~yaaic@27.4.20.166] has quit [Ping timeout: 255 seconds]
03:00 -!- vivekp [~vivek@unaffiliated/vivekp] has joined #mlpack
05:53 < ShikharJ> rcurtin: Actually I'm trying to plot the GAN output to see for myself.
08:47 < Atharva> I figured it out. Can somebody confirm if this is correct? The delta matrix for the linear layer is actually (weight.T * error) % derivative of the activation function, but in mlpack's implementation of ANNs, the activation functions are separate layer objects, so the delta matrix just becomes weight.T * error.
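[Editor's note: a minimal sketch of the two conventions Atharva describes, in plain Armadillo rather than mlpack's actual layer classes. The sigmoid activation and all variable names here are assumptions for illustration; the point is that the "fused" delta and mlpack's linear-layer + activation-layer split yield the same quantity.]

#include <armadillo>

int main()
{
  const arma::uword in = 5, out = 3;
  arma::mat weight = arma::randu<arma::mat>(out, in);  // linear layer weights
  arma::vec z      = arma::randn<arma::vec>(in);       // previous layer's pre-activation
  arma::vec error  = arma::randn<arma::vec>(out);      // delta arriving from the layer above

  // Sigmoid derivative f'(z), used below in both conventions.
  arma::vec fPrime = arma::exp(-z) / arma::square(1.0 + arma::exp(-z));

  // Textbook formula with the activation folded into the layer:
  //   delta = (W^T * error) % f'(z)
  arma::vec deltaFused = (weight.t() * error) % fPrime;

  // mlpack-style split: the linear layer's backward pass only propagates
  //   delta = W^T * error
  // and the separate activation layer applies % f'(z) in its own backward step.
  arma::vec deltaLinear     = weight.t() * error;
  arma::vec deltaActivation = deltaLinear % fPrime;

  // The two conventions agree once both layers have run.
  deltaFused.print("fused delta:");
  deltaActivation.print("split delta:");
  return 0;
}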
09:07 -!- witness_ [uid10044@gateway/web/irccloud.com/x-smarjtpaswqoslpr] has joined #mlpack
12:32 -!- govg [~govg@unaffiliated/govg] has quit [Ping timeout: 256 seconds]
13:51 -!- govg [~govg@unaffiliated/govg] has joined #mlpack
14:41 < rcurtin> ShikharJ: ah, ok, in that case the preprocess_split way may not be the best way to go :)
14:41 < rcurtin> Atharva: I think that is correct, but I am not 100% sure, maybe zoq can confirm
15:57 -!- witness_ [uid10044@gateway/web/irccloud.com/x-smarjtpaswqoslpr] has quit [Quit: Connection closed for inactivity]
17:03 -!- sumedhghaisas [~yaaic@42.107.18.183] has joined #mlpack
17:05 -!- sumedhghaisas2 [~yaaic@42.107.0.223] has joined #mlpack
17:07 -!- sumedhghaisas [~yaaic@42.107.18.183] has quit [Ping timeout: 268 seconds]
17:07 -!- vivekp [~vivek@unaffiliated/vivekp] has quit [Read error: Connection reset by peer]
17:08 -!- manish7294 [8ba70073@gateway/web/freenode/ip.139.167.0.115] has joined #mlpack
17:10 -!- vivekp [~vivek@unaffiliated/vivekp] has joined #mlpack
17:10 -!- manish7294_ [8ba767b7@gateway/web/freenode/ip.139.167.103.183] has joined #mlpack
17:11 < manish7294_> rcurtin: Thanks! Hopefully updating the gradient totally solved the problem. Here is a result - https://pasteboard.co/HnB3o5x.png
17:12 -!- manish7294 [8ba70073@gateway/web/freenode/ip.139.167.0.115] has quit [Ping timeout: 260 seconds]
17:15 < ShikharJ> lozhnikov: You there?
17:16 -!- manish7294_ [8ba767b7@gateway/web/freenode/ip.139.167.103.183] has quit [Ping timeout: 260 seconds]
17:17 -!- sumedhghaisas2 [~yaaic@42.107.0.223] has quit [Ping timeout: 240 seconds]
17:17 -!- sumedhghaisas [~yaaic@42.107.2.74] has joined #mlpack
17:22 < rcurtin> manish7294: looks good, that is with SGD?
17:26 -!- manish7294 [8ba73317@gateway/web/freenode/ip.139.167.51.23] has joined #mlpack
17:26 < manish7294> rcurtin: yes :)
17:26 -!- sumedhghaisas2 [~yaaic@42.107.0.31] has joined #mlpack
17:26 -!- sumedhghaisas [~yaaic@42.107.2.74] has quit [Ping timeout: 240 seconds]
17:28 < rcurtin> great---I guess you are trying now with some larger datasets? if those work, I think maybe we should get some basic benchmarking times, then we can see how we can accelerate the algorithm
17:28 < rcurtin> I have some ideas for avoiding the impostor recalculation
17:28 < rcurtin> I will have to write them down and think about it though
17:30 < ShikharJ> zoq: You there?
17:30 -!- manish7294 [8ba73317@gateway/web/freenode/ip.139.167.51.23] has quit [Ping timeout: 260 seconds]
17:30 -!- sumedhghaisas2 [~yaaic@42.107.0.31] has quit [Ping timeout: 256 seconds]
17:31 -!- manish7294 [8ba7d1b3@gateway/web/freenode/ip.139.167.209.179] has joined #mlpack
17:32 < manish7294> rcurtin: It would be great if we could reduce the cost of impostor recalculation.
17:32 < rcurtin> right, so there are a couple of approaches that we could use together
17:33 < rcurtin> the first is that, if we know the distance to the k+1'th impostor, we can place a bound on how much closer that impostor can get each iteration
17:33 < rcurtin> I haven't derived the bound, but we can say that if the matrix did not change too much, the impostors will all be the same, so there is no need to recalculate
17:33 < manish7294> rcurtin: I have also currently tried with iris, and the final objective seems pretty good
17:33 < rcurtin> another acceleration possibility is to only recalculate impostors for those points in the dataset where the impostors could have changed
17:34 < rcurtin> those two ideas could probably be combined
17:34 < rcurtin> a third possibility, which is an approximation, is to only recalculate impostors every N iterations of the optimization for some N
17:35 < rcurtin> there are lots of possible ideas, so I am not too worried about being able to get some speedup in the end
17:35 < rcurtin> just keep in mind, if you are thinking about MNIST, that nearest neighbor search is going to be slow for that dataset almost no matter what because it is so high dimensional
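[Editor's note: a rough, self-contained sketch of the third idea above, recomputing impostors only every N iterations. Nothing in it is mlpack's actual LMNN code: the brute-force Impostors() helper, the variable names, and the omitted gradient step are placeholders for illustration only.]

#include <armadillo>

// Brute-force helper: for each point, find the nearest point with a different
// label in the transformed space (one "impostor" per point, for simplicity).
arma::uvec Impostors(const arma::mat& transformation,
                     const arma::mat& data,
                     const arma::urowvec& labels)
{
  const arma::mat transformed = transformation * data;
  arma::uvec impostors(data.n_cols, arma::fill::zeros);
  for (arma::uword i = 0; i < data.n_cols; ++i)
  {
    double best = arma::datum::inf;
    for (arma::uword j = 0; j < data.n_cols; ++j)
    {
      if (labels[j] == labels[i])
        continue;
      const double d = arma::norm(transformed.col(i) - transformed.col(j));
      if (d < best) { best = d; impostors[i] = j; }
    }
  }
  return impostors;
}

int main()
{
  arma::mat data = arma::randu<arma::mat>(2, 100);      // toy dataset
  arma::urowvec labels(100, arma::fill::zeros);         // two toy classes
  labels.tail(50).fill(1);
  arma::mat transformation = arma::eye<arma::mat>(2, 2);

  const size_t maxIterations = 100, recalcInterval = 10; // the "N" in the discussion
  arma::uvec impostors;

  for (size_t iter = 0; iter < maxIterations; ++iter)
  {
    // Only re-run the (expensive) impostor search every N iterations; in
    // between, assume the transformation has not changed enough to matter.
    if (iter % recalcInterval == 0)
      impostors = Impostors(transformation, data, labels);

    // ... compute the LMNN objective/gradient with the cached impostors and
    // update `transformation` here (omitted in this sketch) ...
  }
  return 0;
}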
17:37 -!- manish7294 [8ba7d1b3@gateway/web/freenode/ip.139.167.209.179] has quit [Ping timeout: 260 seconds]
17:38 -!- manish7294 [8ba7b84e@gateway/web/freenode/ip.139.167.184.78] has joined #mlpack
17:38 < manish7294> rcurtin: Everything sounds good :)
17:40 -!- sumedhghaisas [~yaaic@42.107.13.190] has joined #mlpack
17:43 -!- manish7294 [8ba7b84e@gateway/web/freenode/ip.139.167.184.78] has quit [Ping timeout: 260 seconds]
17:43 < zoq> ShikharJ: Yeah.
17:43 < zoq> Atharva: That is correct.
17:49 < ShikharJ> zoq: I was wondering why in the GAN implementation we're just training the Generator on a single noise input (columns = 1), and not batch-wise (columns = batchSize)?
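[Editor's note: a minimal Armadillo sketch of the shape difference being asked about; this is not the actual GAN implementation, and the generator.Forward() calls in the comments are hypothetical. Each column of the noise matrix is one sample fed to the generator.]

#include <armadillo>

int main()
{
  const arma::uword noiseDim = 100, batchSize = 32;

  // Approach under discussion: one noise sample per generator step.
  arma::mat singleNoise = arma::randn<arma::mat>(noiseDim, 1);

  // Batch-wise alternative: batchSize noise samples at once, one per column,
  // matching how the discriminator already consumes batches.
  arma::mat batchNoise = arma::randn<arma::mat>(noiseDim, batchSize);

  // generator.Forward(singleNoise, output);  // hypothetical call, shape illustration only
  // generator.Forward(batchNoise, output);
  return 0;
}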
17:57 -!- travis-ci [~travis-ci@ec2-23-22-22-112.compute-1.amazonaws.com] has joined #mlpack
17:57 < travis-ci> manish7294/mlpack#12 (lmnn - 70680c7 : Manish): The build was broken.
17:57 < travis-ci> Change view : https://github.com/manish7294/mlpack/compare/5fb86e879089...70680c7ab7fc
17:57 < travis-ci> Build details : https://travis-ci.com/manish7294/mlpack/builds/74893257
17:57 -!- travis-ci [~travis-ci@ec2-23-22-22-112.compute-1.amazonaws.com] has left #mlpack []
17:57 < zoq> ShikharJ: I guess some things changed since Kris created the PR, like batch support for the conv layer; I think you worked on that part, so I agree batch support is something we should add.
18:00 < Atharva> zoq: Thanks for the confirmation.
18:00 -!- sumedhghaisas2 [~yaaic@42.107.5.235] has joined #mlpack
18:02 -!- sumedhghaisas [~yaaic@42.107.13.190] has quit [Ping timeout: 240 seconds]
18:06 < zoq> ShikharJ: If you need a system to run the code on for hours and hours, we could perhaps use one of the benchmark systems.
18:14 < ShikharJ> zoq: I think I found the reason behind that strategy here (https://github.com/mlpack/mlpack/pull/1066#issuecomment-322114951).
18:22 < ShikharJ> zoq: I didn't specifically work on batch support for CNNs; they just take a single input at a time, since we need them to take input in 2D matrix form and not as individual columns. I believe my concern is superficial here because of the pipeline that lozhnikov has created.
18:23 < ShikharJ> I'm guessing batch support was already there at Kris' time, since the discriminator does take batch-based inputs, though it faces an error that I'm trying to fix.
18:25 < rcurtin> ShikharJ: I am not sure if this is a helpful comment that addresses what you are talking about, but I believe that batch support was added to the ANN framework after Kris's project, with the merge of #1137, which was in October 2017
18:26 < rcurtin> however, if I remember right, that was mostly a change to the optimizers themselves, not to the ANN framework... maybe there were minor changes there
18:32 < ShikharJ> rcurtin: I'll dig into this, thanks!
18:32 -!- sumedhghaisas2 [~yaaic@42.107.5.235] has quit [Ping timeout: 240 seconds]
18:32 < rcurtin> I'm not sure how useful looking through #1137 is, mostly I just wanted to point out that at the time of Kris's code, it would have been reasonable if he implemented it in such a way that he was only considering batches of size one
18:32 < rcurtin> but if it is helpful I am glad to have shared it :)
18:33 < zoq> If it works for batch size = 1 for now, that's fine, we can work on this part later; if you like, I can implement that part
18:34 < ShikharJ> rcurtin: It did help :)
18:35 < rcurtin> :)
18:35 < ShikharJ> zoq: Sure, we just need to check for code correctness for now and we can worry about the batch sizes for later. However, it does seem to be an interesting problem to solve :)
18:36 < zoq> agreed, I'll take another look at the gradient step later today.
18:37 < ShikharJ> zoq: Also, with lozhnikov's approach of a single noise input, the generator network doesn't need a batch normalization layer, since the input is a single column, so that means less computation.
18:38 < ShikharJ> zoq: Let's just hope we can find some good results on parameters. What about the benchmark systems?
18:38 < zoq> right, which is good for testing
18:40 < zoq> if you need a system to run the code?
18:41 < ShikharJ> Yes, are they online computing instances?
18:42 < ShikharJ> I have no clue what the benchmark systems in mlpack are.
18:44 < ShikharJ> zoq: Could you tell me more?
18:46 -!- sumedhghaisas [~yaaic@42.107.6.240] has joined #mlpack
18:49 < zoq> rcurtin: can we use one of the benchmark systems?
18:50 < rcurtin> sure, I think only I am allowed to have root on them because they are Symantec owned, but I can definitely create an account on one of them
18:50 < rcurtin> ShikharJ: basically, Symantec provides some number of build systems for us, and we have 5 systems that we use to benchmark mlpack through the benchmarks system: https://github.com/mlpack/benchmarks/
18:50 < rcurtin> but these systems are useful also for long-running jobs that might happen during GSoC
18:51 < rcurtin> zoq: how about savannah.mlpack.org?
18:51 < rcurtin> ShikharJ: let me know what username you like, then I'll get the account set up and PM you the credentials
18:51 < ShikharJ> rcurtin: Amazing! ShikharJ would be a good username :P Thanks for the help!
19:00 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has joined #mlpack
19:03 -!- sumedhghaisas [~yaaic@42.107.6.240] has quit [Ping timeout: 240 seconds]
19:12 -!- sumedhghaisas [~yaaic@2402:3a80:67c:8f3:e8d9:5916:304b:dabf] has joined #mlpack
19:22 -!- sumedhghaisas2 [~yaaic@2402:3a80:641:3b0d:c54e:9fcb:5ddc:ca8c] has joined #mlpack
19:24 -!- sumedhghaisas [~yaaic@2402:3a80:67c:8f3:e8d9:5916:304b:dabf] has quit [Ping timeout: 260 seconds]
19:25 < zoq> yeah, savannah works great
19:32 -!- sumedhghaisas2 [~yaaic@2402:3a80:641:3b0d:c54e:9fcb:5ddc:ca8c] has quit [Ping timeout: 276 seconds]
19:38 -!- sumedhghaisas [~yaaic@2402:3a80:66c:26b:ab0f:1e2:689:4622] has joined #mlpack
19:42 -!- sumedhghaisas2 [~yaaic@2402:3a80:664:858d:aa4:c5ca:c522:74bd] has joined #mlpack
19:43 -!- sumedhghaisas [~yaaic@2402:3a80:66c:26b:ab0f:1e2:689:4622] has quit [Ping timeout: 255 seconds]
20:21 -!- sumedhghaisas2 [~yaaic@2402:3a80:664:858d:aa4:c5ca:c522:74bd] has quit [Ping timeout: 240 seconds]
20:21 -!- sumedhghaisas [~yaaic@27.4.20.166] has joined #mlpack
20:26 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has quit [Read error: Connection reset by peer]
20:35 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has joined #mlpack
20:35 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has quit [Client Quit]
20:57 -!- witness_ [uid10044@gateway/web/irccloud.com/x-koiqjutynwflbjuq] has joined #mlpack
21:06 -!- travis-ci [~travis-ci@ec2-54-224-19-64.compute-1.amazonaws.com] has joined #mlpack
21:06 < travis-ci> ShikharJ/mlpack#166 (GAN - 01316ea : Shikhar Jaiswal): The build has errored.
21:06 < travis-ci> Change view : https://github.com/ShikharJ/mlpack/compare/43eb8fce4e1b...01316eae673f
21:06 < travis-ci> Build details : https://travis-ci.org/ShikharJ/mlpack/builds/385872513
21:06 -!- travis-ci [~travis-ci@ec2-54-224-19-64.compute-1.amazonaws.com] has left #mlpack []
21:33 -!- sumedhghaisas2 [~yaaic@42.107.6.171] has joined #mlpack
21:33 -!- sumedhghaisas [~yaaic@27.4.20.166] has quit [Ping timeout: 244 seconds]
21:36 -!- sumedhghaisas [~yaaic@2402:3a80:69a:c34d:2c3f:ee3d:8db2:2ec5] has joined #mlpack
21:37 -!- sumedhghaisas2 [~yaaic@42.107.6.171] has quit [Ping timeout: 256 seconds]
21:40 -!- sumedhghaisas2 [~yaaic@2402:3a80:647:ce69:30d2:9153:9237:febe] has joined #mlpack
21:40 -!- sumedhghaisas [~yaaic@2402:3a80:69a:c34d:2c3f:ee3d:8db2:2ec5] has quit [Ping timeout: 240 seconds]
21:41 -!- sumedhghaisas [~yaaic@42.107.5.49] has joined #mlpack
21:44 -!- sumedhghaisas2 [~yaaic@2402:3a80:647:ce69:30d2:9153:9237:febe] has quit [Ping timeout: 240 seconds]
21:52 -!- sumedhghaisas [~yaaic@42.107.5.49] has quit [Ping timeout: 248 seconds]
21:54 -!- sumedhghaisas [~yaaic@42.107.6.145] has joined #mlpack
22:01 -!- sumedhghaisas [~yaaic@42.107.6.145] has quit [Ping timeout: 244 seconds]
22:03 -!- sumedhghaisas [~yaaic@2402:3a80:68e:f515:c8c5:f72a:4603:48d4] has joined #mlpack
22:16 -!- sumedhghaisas [~yaaic@2402:3a80:68e:f515:c8c5:f72a:4603:48d4] has quit [Read error: Connection reset by peer]
22:17 -!- sumedhghaisas [~yaaic@27.4.20.166] has joined #mlpack
--- Log closed Thu May 31 00:00:43 2018