mlpack IRC logs, 2018-06-21

Logs for the day 2018-06-21 (starts at 0:00 UTC) are shown below.

--- Log opened Thu Jun 21 00:00:13 2018
02:24 -!- seeni [dce18d19@gateway/web/freenode/ip.220.225.141.25] has joined #mlpack
02:26 < seeni> can you say why this happens while building mlpack: " from Cython.Distutils import build_ext ModuleNotFoundError: No module named 'Cython' "? But I have Cython installed on my machine.
02:26 -!- seeni_ [~seeni@220.225.141.25] has joined #mlpack
02:29 -!- seeni [dce18d19@gateway/web/freenode/ip.220.225.141.25] has quit [Quit: Page closed]
02:29 -!- seeni_ is now known as seeni
02:53 -!- seeni [~seeni@220.225.141.25] has quit [Quit: seeni]
03:21 < rcurtin> seeni: do you have Cython installed for the correct version of python?
03:21 < rcurtin> and which version is installed?
04:07 -!- manish7294 [8ba7a9fb@gateway/web/freenode/ip.139.167.169.251] has joined #mlpack
04:07 < manish7294> rcurtin: It's probably late, but are you there?
04:13 < rcurtin> yeah, I am about to go to bed though, but I can stay up for a few more minutes :)
04:14 -!- manish7294_ [8ba7a9fb@gateway/web/freenode/ip.139.167.169.251] has joined #mlpack
04:14 < manish7294_> rcurtin: It's regarding distance caching in Impostors().
04:15 < manish7294_> Do you mean the distance matrix we pass to the knn search?
04:15 < rcurtin> right, there are a couple little complexities there
04:15 < manish7294_> this one, right: knn.Search(k, neighbors, distances);
04:15 < rcurtin> but yes, when we do knn.Search(), it returns the distances between the point and its nearest neighbors in that matrix
04:15 < manish7294_> ?
04:15 < rcurtin> right, exactly
04:16 < rcurtin> if we cache the distance results, we can avoid the recalculation, does that make sense?
04:16 < manish7294_> But I saw the knn search code and it reinitializes the distance matrix every time.
04:16 < manish7294_> If I got it right, here it is:
04:16 < manish7294_> arma::Mat<size_t>* neighborPtr = &neighbors;
04:16 < manish7294_> arma::mat* distancePtr = &distances;
04:16 < manish7294_> if (!oldFromNewReferences.empty() &&
04:16 < manish7294_>     tree::TreeTraits<Tree>::RearrangesDataset)
04:16 < manish7294_> {
04:16 < manish7294_>   // We will always need to rearrange in this case.
04:16 < manish7294_>   distancePtr = new arma::mat;
04:16 < manish7294_>   neighborPtr = new arma::Mat<size_t>;
04:16 < manish7294_> }
04:16 < manish7294_> // Initialize results.
04:16 < manish7294_> neighborPtr->set_size(k, referenceSet->n_cols);
04:16 < manish7294_> distancePtr->set_size(k, referenceSet->n_cols);
04:16 < rcurtin> right, and the same with the neighbors matrix
04:17 < manish7294_> Ah! indentation!
04:17 -!- manish7294 [8ba7a9fb@gateway/web/freenode/ip.139.167.169.251] has quit [Ping timeout: 260 seconds]
04:17 < rcurtin> but in Impostors() you are extracting the results of that neighbors matrix into the outputMatrix object
04:17 < manish7294_> Right
04:17 < rcurtin> no worries, I know the code you are talking about :)
04:18 < rcurtin> so the idea would be, also extract the distances into some other output matrix
04:18 < manish7294_> but if knn reinitializes the distance matrix every time, how would it help?
04:18 < rcurtin> and then they can be used by the other parts of EvaluateWithGradient()
04:18 < manish7294_> Right, I got that idea, but I'm worrying about the knn search code
04:19 < rcurtin> yeah, I am not sure I understand why that is a problem though
04:19 < manish7294_> distancePtr = new arma::mat;
04:19 < manish7294_> distancePtr->set_size(k, referenceSet->n_cols);
04:20 < manish7294_> These are the two lines at the start of the search code
04:20 < rcurtin> right, but what I'm saying is the exact same thing is done for the neighbors matrix
04:20 < rcurtin> yet you use the neighbors matrix just fine
04:21 < manish7294_> So, basically we can use the previous distance matrix to relieve knn search from some calculation, right?
04:21 < rcurtin> ah, sorry I think I see the confusion now
04:21 < rcurtin> the idea is not to give the KNN object something that will help the search
04:22 < rcurtin> the idea is to store the distances output from the KNN object so that we can avoid some metric.Evaluate() calls later in the EvaluateWithGradient() function
04:22 < manish7294_> Right, thanks got the point
04:22 < rcurtin> sure, hope that clarified it
04:22 < rcurtin> let me know if not
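
A minimal sketch of the idea above (the helper name is hypothetical; the real Impostors() in the LMNN code differs): knn.Search() fills a distances matrix alongside the neighbor indices, so keeping that matrix around lets EvaluateWithGradient() reuse the distances instead of recomputing them with metric.Evaluate().

    #include <mlpack/methods/neighbor_search/neighbor_search.hpp>

    using namespace mlpack::neighbor;

    // Hypothetical helper: along with the neighbor indices, hand back
    // the distances that Search() computed, so the caller can skip the
    // corresponding metric.Evaluate() calls later.
    void SearchWithCachedDistances(const arma::mat& dataset,
                                   const size_t k,
                                   arma::Mat<size_t>& neighbors,
                                   arma::mat& distances)
    {
      KNN knn(dataset);
      // Search() overwrites both output matrices; the distances come
      // for free as a byproduct of the neighbor search.
      knn.Search(k, neighbors, distances);
    }
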
04:22 < manish7294_> Thanks for keeping up this late :)
04:23 < rcurtin> sure, it's no problem :)
04:23 < rcurtin> I will head to bed now if there's nothing else for now
04:23 < manish7294_> Ya, I got these two ideas while reading that comment, but just got too deep into the one I was talking about. :)
04:23 < rcurtin> it's ok, I know how it goes :)
04:24 < manish7294_> good night :)
04:24 < rcurtin> I would say 'good night' but it is morning for you, so good morning :)
04:24 < manish7294_> :)
04:24 -!- manish7294_ [8ba7a9fb@gateway/web/freenode/ip.139.167.169.251] has quit [Quit: Page closed]
05:15 -!- vivekp [~vivek@unaffiliated/vivekp] has left #mlpack ["Leaving"]
06:28 < ShikharJ> zoq: Sorry for troubling you again, but can we merge the two PRs now? That would also help us with our code for the GAN Optimizer and WGAN.
06:53 < zoq> ShikharJ: Sure, what do you think about adding a simple test?
06:54 < ShikharJ> zoq: Test for the GANs?
06:54 < zoq> ShikharJ: Batch support.
06:55 < zoq> ShikharJ: Ahh, I see we already test GAN with batchSize > 1
06:55 < ShikharJ> zoq: What I was thinking of doing was to uncomment the GANMNISTTest that we have, and set some low hyperparameters.
06:56 < zoq> ShikharJ: Agreed, that sounds reasonable.
06:56 < ShikharJ> zoq: Now with the batch support PR, it takes much less time to compute something like a batch of 10, for one epoch, 20 pre-training and 50 maximum inputs.
06:57 < zoq> ShikharJ: Okay, the batch support is merged; would you like to incorporate the test in the DCGAN PR?
06:58 < zoq> ShikharJ: We can also open a new PR.
06:58 < ShikharJ> zoq: Sure, I'll uncomment all the tests and change the test documentation a bit there. I'm guessing some merge conflicts would also arise in the DCGAN PR after batch support is merged.
06:59 < zoq> ShikharJ: yes
06:59 < zoq> ShikharJ: okay, modifying the test is a good idea, let's do that :)
07:00 < ShikharJ> zoq: Really happy with the work we've achieved. I'll also tmux a session to see how we currently fare against other libraries!
07:03 < zoq> ShikharJ: Yeah, all these really nice additions and improvements.
07:29 < Atharva> zoq: I have been facing a strange issue since yesterday.
07:29 < Atharva> Certain gradient check tests in ANNLayerTest fail or pass based on their position in the file among other tests.
07:29 < Atharva> With no code changed
07:31 < Atharva> Also, a similar issue: if, in GradientLinearLayerTest, I change the loss to mean squared error, then the Atrous Convolution test fails
07:33 < Atharva> What I found out was that the model.Gradient() call from these tests returns all zeros when they fail, but I can't figure out why; nothing else is changing.
07:36 < ShikharJ> Atharva: I also found an issue like that some time back, though it wasn't showing up on Travis, so I ignored it.
07:37 < Atharva> ShikharJ: So the tests don't give any problems on Travis?
07:38 < Atharva> I might as well ignore it then.
07:38 < ShikharJ> They didn't for me. But keep in mind this was some time back. The codebase has changed considerably since then.
07:38 < Atharva> I will try and push a commit once and see if they fail.
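
Nothing in the log pins down the cause, but one classic source of position-dependent test results is shared global RNG state: what a test draws depends on how many draws earlier tests made. A minimal illustration of that mechanism (an assumption for illustration only, not a diagnosis of the ANNLayerTest failures):

    #include <mlpack/core.hpp>
    #include <iostream>

    // Each "test" draws from the process-wide RNG, so its input depends
    // on how many draws ran before it.
    double TestA() { return mlpack::math::Random(); }
    double TestB() { return mlpack::math::Random(); }

    int main()
    {
      mlpack::math::RandomSeed(42);
      // Swapping the order of these two calls changes the value each
      // "test" sees, even though neither test's code changed.
      std::cout << TestA() << " " << TestB() << std::endl;
    }
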
07:40 < ShikharJ> zoq: Could we have access to the AppVeyor builds? It doesn't seem to have an auto branch cancellation feature, and I had pushed a couple of unnecessary builds that I wish to cancel.
09:01 -!- seeni [~seeni@220.225.141.25] has joined #mlpack
09:08 -!- seeni [~seeni@220.225.141.25] has quit [Quit: seeni]
09:28 < zoq> ShikharJ: hm, I thought every mlpack member should be able to start/stop the job, did you use the same github login?
09:29 < zoq> Atharva: What version (last commit) do you use?
09:29 < ShikharJ> zoq: Yes.
09:30 < zoq> ShikharJ: hm, let me disable/enable the setting.
09:30 < zoq> ShikharJ: Okay, can you test again?
09:31 < ShikharJ> zoq: I'll need a running job for that.
09:34 < Atharva> commit 86219b18b5afd23800e72661ab72d0bde0fd7a99
09:34 < ShikharJ> zoq: I still can't cancel the build https://ci.appveyor.com/project/mlpack/mlpack/build/%235195
09:34 < Atharva> merge e08e761 2554f60
09:36 < zoq> ShikharJ: strange, perhaps Atharva could test it as well?
09:36 < zoq> ShikharJ: I can also cancel the build
09:37 < Atharva> zoq: Sorry, what should I test?
09:38 < ShikharJ> zoq: Please cancel all the Implement DCGAN Test builds apart from the latest one https://ci.appveyor.com/project/mlpack/mlpack/history
09:38 < ShikharJ> zoq: There should be two builds
09:40 -!- seeni [~seeni@220.225.141.25] has joined #mlpack
09:47 < zoq> Atharva: Can you test if you are able to cancel the build: https://ci.appveyor.com/project/mlpack/mlpack/build/%235195
09:53 < Atharva> I don't see any options to cancel the build.
09:53 < Atharva> I logged in with the mlpack account
10:04 < Atharva> zoq: Is there a way to do this from the terminal?
10:05 < jenkins-mlpack> Project docker mlpack nightly build build #356: STILL UNSTABLE in 2 hr 51 min: http://masterblaster.mlpack.org/job/docker%20mlpack%20nightly%20build/356/
10:18 < seeni> I got this error while building: " from Cython.Distutils import build_ext
10:18 < seeni> ModuleNotFoundError: No module named 'Cython'
10:18 < seeni> ". But I have Cython installed. How do I fix this?
11:00 -!- seeni [~seeni@220.225.141.25] has quit [Quit: seeni]
11:26 < zoq> Atharva: thanks for testing, perhaps there is a way to stop the build from the terminal.
11:33 < zoq> ShikharJ, Atharva: Pretty sure it works now.
11:46 < ShikharJ> zoq: Yeah it does, thanks zoq!
12:31 -!- manish7294 [8ba7a9fb@gateway/web/freenode/ip.139.167.169.251] has joined #mlpack
12:32 < manish7294> rcurtin: I have added matlab benchmarking scripts and have updated the comment accordingly: https://github.com/mlpack/mlpack/pull/1407#issuecomment-398772089
12:32 < manish7294> It seems we can't use a custom k value with the MATLAB LMNN implementation, though I have not dug into the reason behind it.
12:34 < manish7294> And the MATLAB run is taking a huge amount of memory
12:39 < manish7294> rcurtin: It's regarding the tree building optimization: I have noticed that the total tree building time is always very low (merely half a second on the letters dataset). So, do you think this optimization will be worthwhile?
12:50 < manish7294> And regarding the distance caching --- we need to calculate the distances after every iteration, as metric.Evaluate() is called on the transformed dataset (which changes every iteration); but taking from your idea, we can at least avoid this calculation on the iterations (decided by the range parameter) when we call Impostors() (we will need to cache the distances every time Impostors() is called) and then use them instead of calling metric.Evaluate(). Does i
13:10 < ShikharJ> zoq: I have tmux'd a session, let's see if it shows any improvement over the 3 day runtime that we saw earlier.
13:31 < Atharva> sumedhghaisas: I know we decided on Thursdays 8pm IST, but is it possible for you at 10pm IST?
13:32 < Atharva> or about 9:30?
13:32 < manish7294> rcurtin: Just a bumpy thought. It may sound weird, but I am writing it anyway :) ---- Regarding your bounds idea, we are facing the problem of deciding on a particular value for it, right? Is it possible to have an adaptive bounding value, just like an adaptive step size?
13:37 < ShikharJ> zoq: As expected, the smaller GAN tests pass within the time bound, can we also merge the DCGAN PR now?
13:50 < zoq> ShikharJ: Okay, left some comments regarding the test.
13:51 < ShikharJ> zoq: Cool.
14:12 -!- manish7294 [8ba7a9fb@gateway/web/freenode/ip.139.167.169.251] has quit [Ping timeout: 260 seconds]
14:29 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has joined #mlpack
14:36 < sumedhghaisas> Atharva: Hi Atharva
14:36 < sumedhghaisas> Sure. 10pm works for me as well.
14:36 < sumedhghaisas> If you get free earlier let me know
15:36 -!- travis-ci [~travis-ci@ec2-54-227-123-80.compute-1.amazonaws.com] has joined #mlpack
15:36 < travis-ci> manish7294/mlpack#29 (lmnn - d05cfd3 : Manish): The build has errored.
15:36 < travis-ci> Change view : https://github.com/manish7294/mlpack/compare/8a6709f089b7...d05cfd31cc5e
15:36 < travis-ci> Build details : https://travis-ci.com/manish7294/mlpack/builds/76957776
15:36 -!- travis-ci [~travis-ci@ec2-54-227-123-80.compute-1.amazonaws.com] has left #mlpack []
15:50 < rcurtin> manish7294: a couple comments, sorry that I was not able to respond until now
15:51 < rcurtin> don't worry about a lack of custom k---if the MATLAB script doesn't support it, it's not a huge deal
15:51 < rcurtin> and I am not surprised it takes a huge amount of memory
15:52 < rcurtin> for the tree building optimization, you are right, in some cases tree building can be fast (depends on the dataset)
15:53 < rcurtin> at the same time, unless you've modified the code, it isn't counting the time taken to build the query trees
15:54 < rcurtin> on, e.g., MNIST, tree building takes a much longer time
15:54 < rcurtin> so I think it will be a worthwhile optimization on larger datasets
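
For scale, the tree-building cost is easy to measure standalone (a sketch; random data stands in for a real dataset such as letters or MNIST, and the KNN constructor is what builds the reference tree):

    #include <mlpack/methods/neighbor_search/neighbor_search.hpp>
    #include <chrono>
    #include <iostream>

    using namespace mlpack::neighbor;

    int main()
    {
      // 50 dimensions x 10000 points of random data.
      arma::mat dataset(50, 10000, arma::fill::randu);

      const auto start = std::chrono::steady_clock::now();
      KNN knn(dataset);  // The constructor builds the reference tree.
      const auto stop = std::chrono::steady_clock::now();

      std::cout << "tree build: "
                << std::chrono::duration<double>(stop - start).count()
                << " s" << std::endl;
    }
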
15:54 < rcurtin> for the distance caching, you are right---we can only avoid the calculation exactly when Impostors() is called
15:55 < rcurtin> for the bumpy thought, I'm not sure I fully understand---for bounding values, the bound will depend on ||L_t - L_{t + 1}||_F^2, which is fast to calculate
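
For reference, the bound quantity above is a one-liner with Armadillo (a sketch assuming the transformations are stored as arma::mat):

    #include <armadillo>

    // Squared Frobenius norm of the change in the transformation
    // between iterations, ||L_t - L_{t+1}||_F^2.
    double StepNormSquared(const arma::mat& oldL, const arma::mat& newL)
    {
      const double f = arma::norm(newL - oldL, "fro");
      return f * f;
    }
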