mlpack IRC logs, 2018-06-13

Logs for the day 2018-06-13 (starts at 0:00 UTC) are shown below.

June 2018
--- Log opened Wed Jun 13 00:00:01 2018
00:07 -!- wenhao [731bc1e7@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
00:13 -!- killer_bee[m] [killerbeem@gateway/shell/] has joined #mlpack
00:31 -!- gmanlan [45b5e554@gateway/web/freenode/ip.] has joined #mlpack
00:34 < gmanlan> rcurtin: where do you think would be a good place to add a C++ VS sample app referenced by one of the new /doc/guide tutorials?
01:53 < rcurtin> gmanlan: hmmm, we could make a doc/examples directory
01:54 < rcurtin> if you do that, I will also put some other example programs in there, I think it could be a good idea
01:55 < gmanlan> great, will do that
02:14 -!- vivekp [~vivek@unaffiliated/vivekp] has quit [Ping timeout: 268 seconds]
02:17 -!- vivekp [~vivek@unaffiliated/vivekp] has joined #mlpack
02:49 -!- gmanlan [45b5e554@gateway/web/freenode/ip.] has quit [Quit: Page closed]
03:38 -!- vivekp [~vivek@unaffiliated/vivekp] has quit [Read error: Connection reset by peer]
03:40 -!- vivekp [~vivek@unaffiliated/vivekp] has joined #mlpack
03:40 -!- manish7294 [8ba7d840@gateway/web/freenode/ip.] has joined #mlpack
03:42 < manish7294> rcurtin: Here is an accuracy curve over number of passes on the vc2 dataset with k = 5, step size = 0.01, batch size = 50, optimizer = amsgrad,
03:44 < manish7294> Will this work?
06:15 < ShikharJ> zoq: Are you there?
07:12 < manish7294> rcurtin: I have posted some resultant graphs on PR, please have a look at them
07:39 -!- vivekp [~vivek@unaffiliated/vivekp] has quit [Ping timeout: 240 seconds]
07:44 -!- vivekp [~vivek@unaffiliated/vivekp] has joined #mlpack
10:46 < jenkins-mlpack> Project docker mlpack nightly build build #348: UNSTABLE in 3 hr 32 min:
11:14 -!- witness_ [uid10044@gateway/web/] has quit [Quit: Connection closed for inactivity]
11:26 -!- vivekp [~vivek@unaffiliated/vivekp] has quit [Ping timeout: 268 seconds]
11:31 -!- vivekp [~vivek@unaffiliated/vivekp] has joined #mlpack
11:36 < zoq> ShikharJ: yes
11:40 < ShikharJ> zoq: In FFNs, we take each input column as a data point, and each row as a data dimension, correct? Then what does each slice represent?
11:42 < sumedhghaisas> Atharva: Hi Atharva
11:42 < sumedhghaisas> Sorry, got a little busy yesterday
11:43 < sumedhghaisas> Maybe I have a solution for generation problem :)
11:45 < sumedhghaisas> rcurtin: Hi Ryan, it's super early in Georgia, but maybe you are awake? :)
11:45 < Atharva> sumedhghaisas: Hi Sumedh, don't worry about it.
11:45 < Atharva> Awesome! How?
11:46 < zoq> ShikharJ: That's correct. In the case of the RNN class, each time step is a slice of a cube. If I remember right, we don't use slices/cubes inside the FFN class.
11:47 < ShikharJ> zoq: I'm asking this because for the CelebA, I need to have 3 channels for each input image. I think we can take each slice for a different channel? What do you think?
11:49 -!- travis-ci [] has joined #mlpack
11:49 < travis-ci> manish7294/mlpack#25 (lmnn - 3415f20 : Manish): The build has errored.
11:49 -!- travis-ci [] has left #mlpack []
11:49 < sumedhghaisas> Atharva: Okay. We first support adding an entire FFN as a layer. Then we create a decoder FFN and an encoder FFN, and we merge them together along with a repar layer in another FFN
11:49 < zoq> ShikharJ: Yes, I think we can do this, another option is to vectorize the input, which is necessary at some point anyway if the layer expects a matrix/vector.
11:50 < sumedhghaisas> So for conditional generation, the user can use the full FFN, and for unconditional generation they can just pass the gaussian sample directly into the decoder
11:51 < zoq> sumedhghais Atharva: Perhaps the sequential layer is useful here, which is basically the FFN class but without the Train/Evaluate function.
11:52 < zoq> ShikharJ: But I guess it makes sense to provide an interface that accepts arma::cube as input.
11:52 < sumedhghaisas> zoq: ahh yes. That's better. Wait... but we need to Evaluate the decoder for generation
11:52 < ShikharJ> zoq: By vectorizing the input, I guess we'll have to introduce the least amount of changes in the codebase, but the work of preparing the dataset will fall on the user.
11:53 < zoq> sumedhghais: You can still use the Forward function, which is what Evaluate calls anyway.
11:53 < sumedhghaisas> zoq: yeah... that just slipped my mind
11:55 < zoq> ShikharJ: That depends on how we load the dataset right?
11:56 < sumedhghaisas> zoq: I also wanted to talk to you about the generative layers... by generative layers I mean layers which define a distribution over the input. For example the output layer of a VAE. Do you think we need to support that?
11:56 < zoq> ShikharJ: But I think using arma::cube is more intuitive, so perhaps a combination of both is the best way?
11:57 < sumedhghaisas> For example, in VAE my tensorflow implementation involves defining a distribution as the last layer, and the reconstruction loss is basically the log_prob of the input in that distribution
11:57 < ShikharJ> zoq: What I mean to say is that if we expect the input to be a vector irrespective of the number of channels, we'll practically have to do no additional work, as inside the convolution layer, we alias an input point as an arma::Cube(input, rows, cols, slices ...);
11:57 < sumedhghaisas> This generalizes to any distribution and input
11:57 < Atharva> sumedhghaisas: The Evaluate function calls the private function Forward and not the public member
11:58 < zoq> sumedhghais: I see that this could be helpful, but I guess if it's not necessary at this point, it could be delayed?
11:58 < ShikharJ> zoq: But when we take the input point as a cube of dimensions (Rows x 1 x Channels), we'll have to reshape the input data.
11:59 < sumedhghaisas> Atharva: I think Marcus is right. We could use a sequential layer for the decoder and use the Forward function to generate unconditional samples
12:00 < zoq> ShikharJ: What I'd like to avoid is adding support for arma::cube to each layer; creating an alias should be negligible.
12:01 < Atharva> sumedhghaisas: Yes, just checked it out
12:01 < sumedhghaisas> zoq: It's not super necessary right now. We could create a loss layer specific to binary images such as MNIST and use it in the VAE, but that is as much work as defining the distribution layer. My question is: can the current framework accommodate layer output other than arma::mat?
12:02 < ShikharJ> zoq: I see, then we can just expect the input as a vector point itself.
12:02 < zoq> Atharva: Here is an example that creates two networks and merges the output:
12:02 < sumedhghaisas> zoq: I think it can, as OutputParameter is a template class
12:03 < zoq> ShikharJ: What we could do is provide an interface in the GAN class that takes arma::cube as input, and inside that class we can do something like arma::Mat(slice(.).memptr(), ...)
12:04 < zoq> sumedhghais: I don't mind modifying the output layer infrastructure; adding another template parameter should be easy.
12:06 < sumedhghaisas> zoq: I was thinking the same thing but I am not able to judge how difficult that is. If it's relatively easy then I would push for a templatized distribution layer, which will make things very easy for VAE and other generative frameworks
12:07 < sumedhghaisas> as we already have some distributions defined, the framework will by default support various datasets, rather than defining a loss for each specific one
12:07 < ShikharJ> zoq: I think, rather than creating an interface, we can provide a dataset conversion routine for converting the individual Cube inputs into vectorised columns.
12:08 < zoq> sumedhghais: I can help with the implementation.
12:09 < zoq> ShikharJ: That is a good idea. I guess at some point it makes sense to integrate that into the load function, which I think doesn't support arma::cube.
12:09 < sumedhghaisas> zoq: amazing! I think it will also help in the GAN framework?
12:12 < zoq> sumedhghais: Yes, this could be a nice addition.
12:12 < sumedhghaisas> zoq: With the distribution layer and sequential layer, do we actually require a separate GAN class? We could do it by having the generator and discriminator as sequential layers, with the generator's output layer as a distribution layer. What do you think? This way GANs could use variational inference with a Repar layer attached to the generator
12:13 < sumedhghaisas> I am not well versed in GANs, but I saw a couple of papers using variational inference in GANs
12:14 < sumedhghaisas> zoq: I will add this implementation to next week's agenda. :)
12:15 -!- manish7294 [8ba7d840@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
12:17 < Atharva> sumedhghaisas: So, what are we exactly going to do with the outputlayer?
12:18 < zoq> sumedhghais: Like the idea to combine both :)
12:19 < sumedhghaisas> Atharva: Okay so the plan is this. Currently each layer stores its output in output parameter
12:20 < sumedhghaisas> The new layer we are going to define will output a distribution, a core::dist object, rather than a matrix
12:20 < sumedhghaisas> so we make sure this kind of layer can exist in the framework
12:21 < Atharva> Okay, yes
12:21 < sumedhghaisas> the layer in question is a very simple one, it just takes an input and defines a templatized distribution over it
12:22 < Atharva> So, will that replace the repar layer as well
12:22 < Atharva> Or will that be different
12:22 < sumedhghaisas> It will be the output layer for the VAE; that way the loss function we define, the reconstruction loss, becomes super easy: we just check the log_prob in the output distribution
12:23 < sumedhghaisas> Ohh repar layer will stay the same, these changes will affect the decoder output
12:24 < Atharva> Okay, so we are talking about the OutputLayer object, the final layer of networks
12:24 < Atharva> Understood
12:25 < sumedhghaisas> If we implement log_prob and its backward in the distribution, we could play around with any dataset
12:25 < sumedhghaisas> Okay this is the work for later. How is Jacobian test holding up? :)
12:25 < sumedhghaisas> Lets get the Repar layer completed first.
12:26 < Atharva> Sorry I had some other things yesterday, I will push a commit soon with Jacobian fixed. What do you think remains after that in Repar layer?
12:27 < sumedhghaisas> Atharva: and I will also recommend testing the 2 booleans of the layer. Super easy and short tests, but that will make sure that some future user cannot mess up the implementation
12:28 < Atharva> Okay, about the Jacobian, I have added the extra loss from visitor to the Backward function in FFN as well. Not just the Evaluate
12:29 < Atharva> Sorry, I still can't figure out why exactly it's failing
12:29 < sumedhghaisas> So we added Gradient test, Jacobian test, Forward and Backward and later the booleans, that should cover it :)
12:29 < sumedhghaisas> ohh is it still failing after the changes?
12:30 < sumedhghaisas> or you mean you didn't understanding why it was failing in the first place?
12:30 < sumedhghaisas> *understand
12:30 < Atharva> I haven't added the boolean yet, but it's failing after adding the extra loss to the Backward function.
12:30 < Atharva> Yes, I haven't exactly understood
12:31 < sumedhghaisas> Ahh okay, let me walk you through the Jacobian test
12:33 < sumedhghaisas> Okay so we have two Jacobians: jacobianA, which is the approximate Jacobian, and jacobianB, which is the real one based on the gradients
12:33 < sumedhghaisas> okay wait, have you studied jacobian?
12:33 < Atharva> I think I have but can't remember anything
12:34 < Atharva> Maybe it's better if I first look it up online and then we discuss it
12:34 < sumedhghaisas>
12:34 < sumedhghaisas> Wikipedia should be enough for basic understanding :) its geometrical interpretation is too complex though
12:35 < sumedhghaisas> yeah :) take a look at what jacobian is and compare it with the computation of jacobianA and jacobianB in our code.
12:36 < Atharva> Okay sure
12:36 < sumedhghaisas> I have a little test to see if you understand it correctly or not. Try to answer this: what extra term do we have to add in the Forward of the layer to make the Jacobian test work with klBackward included in the gradients?
12:37 < sumedhghaisas> is the question clear enough?
12:37 < Atharva> question is clear, I will get back on this
12:37 < sumedhghaisas> great! Good luck!
12:51 < rcurtin> sumedhghaisas: I am awake now :)
12:54 < sumedhghaisas> rcurtin: ahh wanted to talk about the distribution layer issue :) BTW the extra loss collector visitor is added as part of Reparametrization layer PR. :)
12:54 < rcurtin> ok, do you mean the multivariate Gaussian distribution issue?
12:54 < sumedhghaisas> okay so the question regarding distribution layer is, currently all layers output a matrix, can a layer output a dist object?
12:55 < sumedhghaisas> I mean can our current framework support it?
12:55 < rcurtin> I don't think that would make sense, since the next layer would expect a matrix type as input
12:55 < sumedhghaisas> ahh yes, the distribution will go inside the layer, any, not just gaussian
12:55 < rcurtin> what you could do, is output means and covariances as a matrix, and then the next layer could use those directly
12:56 < rcurtin> but I think in that case, you would require that the layer after the one that outputs means and covariances has a specific type
12:56 < rcurtin> i.e. if you put the means and covariances into a linear layer, it probably wouldn't make sense
12:58 < sumedhghaisas> rcurtin: we could make the next layer accept a dist as input. But now I am wondering if we could bypass this by templatizing the next layer with the distribution and outputting dist parameters as you mentioned
13:00 < sumedhghaisas> So let me see if I can explain the issue in more detail
13:00 < Atharva> sumedhghaisas: From what I could figure out, jacobianA is approximate and B is true. This test is a lot like the gradient check, just that we are taking all the dimensions of the input into account at once
13:01 < rcurtin> I'm not sure I understand the situation fully. personally, I don't have a problem with either way, but if a layer outputs a non-matrix object, then we have to be very careful to ensure that a user can't add a subsequent layer that accepts a matrix object
13:01 < rcurtin> I wonder if it might be better to make a "combined" layer so that there is no need to output the distributions, you could just use them internally in the "combined" layer
13:02 < rcurtin> but, I don't know VAEs well, so like I said I don't fully understand the problem. you and Atharva know better, I am just proposing ideas based on what I think the problem is :)
13:02 < sumedhghaisas> A VAE's output is a distribution, and the reconstruction error is basically the log_prob of the input in the output distribution. Now if we have such a distribution layer, the reconstruction loss implementation becomes easy and very generic. It also helps us in generation: as the output is an actual dist, the Predict function will output the same, and the user could sample from this distribution
13:03 < rcurtin> oh, I see, so you are not passing the distribution between layers, the distribution is the output of the autoencoder
13:03 < sumedhghaisas> yes... but the distribution layer can be in the middle, for example in GANs
13:04 < sumedhghaisas> in GANs the output of the generator is a distribution, a sample of which is passed to the discriminator
13:04 < rcurtin> hmm, maybe it is worth looking at how Shikhar implemented this then?
13:05 < sumedhghaisas> in that case, we will have a distribution layer and the next layer will sample from it.
13:08 < sumedhghaisas> rcurtin: yeah, I need to look into the GAN class in more detail. I wanted to make sure the user can use variational inference in any model they desire.
13:10 < sumedhghaisas> So the generic design becomes: any component is a Sequential layer, as Marcus suggested; this component can be a generative one, which outputs a distribution, or a deterministic one; all these components connect together with the FFN
13:11 < sumedhghaisas> Now after the model is trained, we could use the generative components separately, as they output a distribution
13:11 < rcurtin> I think that would be reasonable
13:12 < sumedhghaisas> very useful in VAEs and GANs; in a VAE the decoder is generative whereas the encoder is deterministic, and for sampling we need the decoder separately; similar for GANs with the generator and discriminator
13:13 -!- travis-ci [] has joined #mlpack
13:13 < travis-ci> manish7294/mlpack#27 (lmnn - 8a6709f : Manish): The build has errored.
13:13 -!- travis-ci [] has left #mlpack []
13:14 < sumedhghaisas> the only issue is: can a layer output something other than a matrix in our framework? The problem with the next layer's input could be solved with templatization, and some new layers could be defined which accept dist input
13:15 < sumedhghaisas> Atharva: ahh sorry I forgot to reply to you :)
13:15 < sumedhghaisas> Atharva: You are right, jacobianA is the approximate one and jacobianB is the real one
13:16 < sumedhghaisas> although jacobianB also takes into account the error signal from KL, as we add it in Backward
13:17 < sumedhghaisas> but jacobianA is computed on the forward function which does not add KL to the output
13:17 < sumedhghaisas> thus, these 2 jacobians differ
13:17 < sumedhghaisas> if we add KL to the forward they will become the same
13:17 < sumedhghaisas> but KL is a part of loss
13:17 < sumedhghaisas> so we add it separately in the loss
13:52 < Atharva> Yes, in the Repar backward function I have added the kl error as well, but I also add the double kl loss to the total loss (the kl forward function) in the FFN Backward function
13:52 < Atharva> Oh!
13:53 < Atharva> I see, but the JacobianA has no idea of the KL loss
13:54 < Atharva> Okay, I got a little confused because the Forward() function of the FFN class doesn't actually evaluate it
13:56 < Atharva> I will put in the second boolean
13:56 < Atharva> and test the booleans
14:02 -!- wenhao [731bc52a@gateway/web/freenode/ip.] has joined #mlpack
14:06 < rcurtin> sumedhghaisas: I think that it would be hard for a layer to output something that's not a matrix. so the suggestion from my end would be, just output a matrix that holds the means and covariances
14:06 < rcurtin> and when the next layer expects distributions as input, it can just use the passed input matrix that has means and covariances
14:06 -!- manish7294 [8ba79c0c@gateway/web/freenode/ip.] has joined #mlpack
14:07 < manish7294> rcurtin: Did you see the graphs?
14:18 -!- wenhao [731bc52a@gateway/web/freenode/ip.] has quit [Quit: Page closed]
14:24 < rcurtin> manish7294: I saw them, but I have not had a chance to respond
14:24 < rcurtin> it seems to me like there would not be an easy setting to choose; the datasets don't always converge quickly
14:24 < rcurtin> how are we doing with the overall runtime? has it been significantly reduced further?
14:26 < manish7294> rcurtin: I ran the datasets from 1 to 150 passes, and the maximum total time was on the diabetes dataset (768 points), at 1 hr 21 mins
14:26 < manish7294> and on iris it was less than 15 mins
14:26 < rcurtin> on iris we really need to be shooting for more like 10 seconds or less
14:27 < rcurtin> where are the bottlenecks still?
14:27 < manish7294> for passes from 1 to 150
14:27 < rcurtin> I have a long flight on Friday and during that time I will be able to try out the pruning idea I have been talking about, but there will not be time for me to do that until then
14:27 < rcurtin> for now, the idea of only recomputing impostors every 100 iterations or so (or whatever number) is ok, I think
14:27 < rcurtin> possibly that could be increased even further
14:28 < manish7294> Ya, it gave some good speed up
14:28 < manish7294> I have pushed it to master
14:28 < rcurtin> the thing is, LMNN is kind of like a preprocessing step for some algorithms, not a learning algorithm itself. so nobody is going to want to wait many hours for a ~1-2% increase in kNN accuracy
14:29 < manish7294> iris 100 passes - 3.6secs
14:29 < manish7294> computing_neighbors: 2.113525s
14:29 < rcurtin> ok, sorry, maybe I misunderstood? I thought you said it took less than 15 minutes (I assumed that to mean it took nearly 15 minutes)
14:29 < rcurtin> on the iris dataset that is
14:29 < manish7294> with recomputation after every 10 iterations, total_runtime = 1.7 secs
14:30 < manish7294> 15 mins was for 150 lmnn runs with passes running from 1 to 150
14:30 < rcurtin> ok, I see
14:31 < manish7294> the every-N-iterations idea is giving some good speedups
14:31 < rcurtin> I think we should develop a 'standard benchmark set' of datasets and optimizer configurations so that we can better track the progress
14:32 < rcurtin> do you think that would be worthwhile to do now, and then revisit LMNN to accelerate it?
14:32 < rcurtin> it feels to me a little like we are trying lots of things, but I don't feel like I have the best grasp of what has helped, what has hurt, and how far away we are from where we want to be (with respect to speed)
14:32 < manish7294> Sure, no problem
14:33 < rcurtin> I think that was set up for next week in your proposal, but maybe it is better to get the benchmarking scripts set up now, then use the benchmarking system to test the speed of the code as we improve it
14:33 < rcurtin> if you'd prefer not to do that, that's okay also, but I think it could be helpful
14:34 < manish7294> But will it work without merging lmnn, as the benchmark system is in a different repo?
14:35 < manish7294> I will post some runtimes too later today on PR, so you can have a reference.
14:42 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has joined #mlpack
14:46 < rcurtin> it's not too hard to modify the benchmarking system code to work with a custom mlpack, I can show you how
14:47 < rcurtin> you'd need to do that anyway to test any changes
14:47 < rcurtin> anyway, yeah, if you can post some runtimes on the PR, that would be great
14:48 < rcurtin> I'm sorry about the slowness from my end. the paper took more time than I expected, and technically I am taking a vacation this week so my work for LMNN has been reduced :)
14:48 < rcurtin> but I have time set aside on Friday, since I have a long flight. it will be perfect
14:48 < rcurtin> (for getting some code written that is)
14:53 < manish7294> rcurtin: no worries! Everything is good till now :)
14:58 < rcurtin> :)
15:08 < manish7294> rcurtin: I have added some runtimes over iris, I hope they at least help a bit
15:09 -!- manish7294_ [8ba79c0c@gateway/web/freenode/ip.] has joined #mlpack
15:09 < rcurtin> manish7294: thanks, any chance you could also do the same for another dataset like vc2 or covertype-5k? (or maybe the full covertype? maybe that takes too long though)
15:11 < manish7294_> sure, I will do it for vc2; it will be a bit faster and more comfortable, if that is okay? :)
15:12 -!- manish7294 [8ba79c0c@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
15:16 -!- manish7294_ [8ba79c0c@gateway/web/freenode/ip.] has quit [Quit: Page closed]
15:18 < rcurtin> yeah, that's fine
15:29 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has quit [Read error: Connection reset by peer]
15:30 -!- manish7294 [8ba79c0c@gateway/web/freenode/ip.] has joined #mlpack
15:31 < manish7294> rcurtin: Done! Added a comment regarding vc2 benchmarks on PR.
15:31 -!- manish7294 [8ba79c0c@gateway/web/freenode/ip.] has quit [Client Quit]
15:32 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has joined #mlpack
15:47 -!- vivekp [~vivek@unaffiliated/vivekp] has quit [Read error: Connection reset by peer]
15:50 -!- vivekp [~vivek@unaffiliated/vivekp] has joined #mlpack
16:16 < Atharva> sumedhghaisas: I updated the PR. Also, after Ryan's remarks, what should we decide to do finally, so that I can get started?
16:16 -!- vivekp [~vivek@unaffiliated/vivekp] has quit [Read error: Connection reset by peer]
16:20 -!- vivekp [~vivek@unaffiliated/vivekp] has joined #mlpack
16:30 -!- vpal [~vivek@unaffiliated/vivekp] has joined #mlpack
16:32 -!- vivekp [~vivek@unaffiliated/vivekp] has quit [Ping timeout: 264 seconds]
16:32 -!- vpal is now known as vivekp
16:36 -!- yaswagner [4283a544@gateway/web/freenode/ip.] has joined #mlpack
16:36 < sumedhghaisas> Atharva: Sorry got little busy
16:37 < sumedhghaisas> So what we can do is, for now create a reconstruction loss layer which accepts distribution as a template parameter
16:38 < sumedhghaisas> The loss layer will take VAE output, define a distribution over it and compute the loss
16:38 < sumedhghaisas> For this 2 functions need to be defined in distributions, log_prob and its backward
16:41 < sumedhghaisas> For testing, we will make the VAE output a single-variable gaussian distribution so we could use the currently defined distribution
16:42 < sumedhghaisas> ahh but the batch dimension will still create a problem
16:43 < sumedhghaisas> okay. Let's create NormalDistribution in dist first, that's the next task
16:43 < Atharva> with mean and std, right
16:43 < ShikharJ> wife
16:43 < sumedhghaisas> Sorry if I am confusing you too much :) I will send out a mail detailing the task
16:44 < sumedhghaisas> Atharva: ahh you are right
16:44 < ShikharJ> Ah sorry, some random autofill typing :(
16:44 < sumedhghaisas> So are you clear on the NormalDistribution task?
16:44 < Atharva> Yeah, a mail will be nice :)
16:45 < ShikharJ> zoq: Are you there?
16:45 < sumedhghaisas> Atharva: Let's clear up as much as we can here and then summarize it in a mail
16:45 < Atharva> about the distribution, for the reconstruction loss, how can we calculate it with just the distribution?
16:45 < Atharva> Yes
16:46 < sumedhghaisas> Okay, that's the next task :) We can also discuss that right now, but first let's be clear on the upcoming task, then I will try to explain this
16:46 < Atharva> I mean, using the data, we would take the mean squared error or negative log likelihood
16:46 < Atharva> yeah right
16:46 < Atharva> we will discuss it later
16:50 < Atharva> Just to confirm, what will be the private members we will have? Because the GaussianDistribution class has mean, covariance, covLower, invCov, logDetCov
16:53 < Atharva> I also think I should open a different PR for this
16:57 < sumedhghaisas> Atharva: We have similar private functions
16:58 < sumedhghaisas> mean, variance, log_prob, log_prob backward
16:59 < sumedhghaisas> For now this should achieve the result
17:01 < sumedhghaisas> The distribution will accept a matrix of means and a matrix of stddevs to create a distribution
17:01 < sumedhghaisas> shape checking must be done in the constructor
17:02 < sumedhghaisas> log_prob should accept a matrix of the same shape and return the log probability of that matrix under the distribution
17:02 < sumedhghaisas> log_prob_backward should accept the same matrix and return the gradient of log_prob given that matrix
17:03 < sumedhghaisas> And yes, definitely separate PR for this :)
17:03 < Atharva> Okay, everything understood, I will get on this.
17:03 < sumedhghaisas> Great!
17:04 < Atharva> Also, whenever you are free, do review the sampling PR
17:04 < Atharva> I think it's done and further work should go in new PRs
17:10 < sumedhghaisas> Atharva: I am taking a look at it now :) But mostly everything looks good... If I don't find anything I will merge it in tomorrow :)
17:10 < sumedhghaisas> and yes, All future work should go in new PRs
17:11 < Atharva> I don't know why the AppVeyor build keeps failing; I will have to check what it says. It has failed for all the commits till now.
17:12 < Atharva> It builds without problem on my pc
17:17 < sumedhghaisas> I have to run outside for some time. Try to fix the AppVeyor build; if it doesn't work, I will take a look tomorrow.
17:17 < Atharva> Don't worry about it, I will figure it out.
17:47 < yaswagner> Hi guys! I'm trying to build bindings for Go, and to do so I first need to make a C API. I'm working on trying to bind the CLI, so I'm working with the mlpack/core/util/ directory right now. I'm having trouble compiling my code. Is there a way for me to compile my files using cmake, without having to recompile the whole library?
17:50 < rcurtin> yaswagner: cmake should only recompile the files that are needed, so if you've modified a core file it may need to recompile a lot
17:51 < rcurtin> if you just need the library, you could do 'make mlpack' and this could save some time
17:52 < yaswagner> ok perfect thank you!
17:52 < rcurtin> also 'make -jN mlpack' will use N cores, which can help
17:52 < rcurtin> (substitute N with however many cores you want to use of course)
17:59 < yaswagner> Will do. I am not modifying a core file, I'm adding .h header files, so I think just using that should work!
18:07 < rcurtin> if nothing is actually being compiled, you can also do 'make mlpack_headers' which just moves all the source files from src/ into the build directory
18:11 < rcurtin> hope that helps, let me know if I can help with anything else :)
18:11 < rcurtin> I am going to step out to change some brake lines on my car now... I'll be back in a little while
18:11 < rcurtin> need somebody to press down on the brake pedal but I suspect nobody in this channel can help with that :)
18:12 < yaswagner> Perfect! will let you know if im still stuck
18:24 -!- witness_ [uid10044@gateway/web/] has joined #mlpack
19:04 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has quit [Read error: Connection reset by peer]
19:20 < zoq> rcurtin: I could help you out if you wait like 24 hours?
19:26 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has joined #mlpack
20:02 < rcurtin> zoq: ;)
20:02 < rcurtin> I think Emily will come home in the next few hours and I will ask her to do it :)
20:03 < zoq> probably faster and at least for me cheaper :)
20:18 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has quit [Quit: Leaving]
--- Log closed Thu Jun 14 00:00:03 2018