mlpack IRC logs, 2020-02-16

Logs for the day 2020-02-16 (starts at 0:00 UTC) are shown below.

February 2020
--- Log opened Sun Feb 16 00:00:04 2020
00:04 -!- k3nz0_ [~k3nz0@unaffiliated/k3nz0] has quit [Read error: Connection reset by peer]
00:04 -!- k3nz0__ [~k3nz0@unaffiliated/k3nz0] has joined #mlpack
03:56 -!- k3nz0_ [~k3nz0@unaffiliated/k3nz0] has joined #mlpack
03:59 -!- k3nz0__ [~k3nz0@unaffiliated/k3nz0] has quit [Ping timeout: 268 seconds]
06:29 < Saksham[m]> Hi, if I need to run a specific test, do I need to build the whole test suite, or is there any way to build only the specific test file I updated?
06:31 < KhizirSiddiquiGi> Saksham: you can directly run tests as `./bin/mlpack_test -t RBMNetworkTest/SpikeSlabRBMCIFARTest`
06:32 < KhizirSiddiquiGi> basically in the form `./bin/mlpack_test -t TestSuiteName/TestCaseName`
06:33 < KhizirSiddiquiGi> but to build it, you will have to build the whole test suite.
06:35 < Saksham[m]> Yeah that was my doubt
06:35 < Saksham[m]> Thanks a lot
07:04 < sailor[m]> my build is always stuck at 80% at a certain point in visual studio windows... but i see that in the debug folder, there are 47 .lib files(such as "mlpack_softmax_regression.lib"). Does this mean that I can use just these libraries and run tests on just them?
07:05 < GauravSinghGitte> In [c_relu_impl.hpp]( during backward propagation, why are the rows of matrix 'temp' taken into consideration?
07:06 < GauravSinghGitte> When the gradient 'g' is calculated?
07:09 < GauravSinghGitte> (edited) ... is calculated? => ... is calculated
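(For context on the question above: CReLU concatenates relu(x) and relu(-x), so the layer's output has twice as many rows as its input; the backward pass therefore has to fold the doubled gradient back onto the original rows. A plain-array sketch of that idea — this is illustrative, not mlpack's actual implementation, and the function names are made up:)

```cpp
#include <cstddef>
#include <vector>

// CReLU forward: output is [max(x, 0); max(-x, 0)], twice the input size.
std::vector<double> CReLUForward(const std::vector<double>& x)
{
  std::vector<double> out(2 * x.size());
  for (std::size_t i = 0; i < x.size(); ++i)
  {
    out[i] = x[i] > 0 ? x[i] : 0.0;              // top half: relu(x)
    out[i + x.size()] = x[i] < 0 ? -x[i] : 0.0;  // bottom half: relu(-x)
  }
  return out;
}

// CReLU backward: the incoming gradient gy has 2n entries but the input had
// only n, so the two halves (the "rows" of the concatenated output) are
// folded back: dx_i = gy_i * 1[x_i > 0] - gy_{i+n} * 1[x_i < 0].
std::vector<double> CReLUBackward(const std::vector<double>& x,
                                  const std::vector<double>& gy)
{
  std::vector<double> dx(x.size());
  for (std::size_t i = 0; i < x.size(); ++i)
    dx[i] = (x[i] > 0 ? gy[i] : 0.0) + (x[i] < 0 ? -gy[i + x.size()] : 0.0);
  return dx;
}
```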
07:10 -!- anirudh [772a9d39@] has joined #mlpack
07:10 -!- anirudh [772a9d39@] has quit [Remote host closed the connection]
08:12 < jenkins-mlpack2> Yippee, build fixed!
08:12 < jenkins-mlpack2> Project docker mlpack nightly build build #615: FIXED in 2 hr 58 min:
08:22 < PrinceGuptaGitte> Hi @zoq , thanks for reviewing my PR #2192 , I've made changes as you suggested. Please have a look when you get time. Thanks.
08:45 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has joined #mlpack
09:04 < PrinceGuptaGitte> I was training a FFN on MNIST dataset, after training a whole epoch the time printed out is 13s while in reality it
09:05 < PrinceGuptaGitte> (edited) ... reality it => ... reality it is over 1min.
09:05 < PrinceGuptaGitte> I used `ens::ProgressBar()` and `ens::PrintLoss()` callbacks
11:20 -!- k3nz0__ [~k3nz0@unaffiliated/k3nz0] has joined #mlpack
11:23 -!- k3nz0_ [~k3nz0@unaffiliated/k3nz0] has quit [Ping timeout: 260 seconds]
12:00 -!- k3nz0__ [~k3nz0@unaffiliated/k3nz0] has quit [Ping timeout: 240 seconds]
12:11 -!- k3nz0 [~k3nz0@unaffiliated/k3nz0] has joined #mlpack
13:59 < rcurtin> PrinceGuptaGitte: then probably some part that isn't training is taking 47s or more; have you timed all parts of the program?
14:07 -!- k3nz0 [~k3nz0@unaffiliated/k3nz0] has quit [Remote host closed the connection]
14:07 -!- k3nz0 [~k3nz0@unaffiliated/k3nz0] has joined #mlpack
14:15 -!- k3nz0 [~k3nz0@unaffiliated/k3nz0] has quit [Ping timeout: 240 seconds]
14:25 -!- k3nz0 [~k3nz0@unaffiliated/k3nz0] has joined #mlpack
14:31 < PrinceGuptaGitte> It was only the training part that took 1 min. Loading data happened under 10 seconds
14:36 -!- k3nz0 [~k3nz0@unaffiliated/k3nz0] has quit [Ping timeout: 268 seconds]
14:47 -!- k3nz0 [~k3nz0@unaffiliated/k3nz0] has joined #mlpack
14:48 -!- hrivu21 [2a6e887c@] has joined #mlpack
14:48 -!- hrivu21 [2a6e887c@] has quit [Remote host closed the connection]
14:51 -!- hrivu21 [2a6e887c@] has joined #mlpack
14:58 -!- hrivu21 [2a6e887c@] has quit [Remote host closed the connection]
15:21 -!- k3nz0 [~k3nz0@unaffiliated/k3nz0] has quit [Remote host closed the connection]
15:21 -!- k3nz0 [~k3nz0@unaffiliated/k3nz0] has joined #mlpack
15:28 -!- k3nz0_ [~k3nz0@unaffiliated/k3nz0] has joined #mlpack
15:31 -!- k3nz0 [~k3nz0@unaffiliated/k3nz0] has quit [Ping timeout: 265 seconds]
15:57 -!- saksham189Gitter [gitteranct@gateway/shell/] has joined #mlpack
15:57 < saksham189Gitter> @zoq did you get a chance to look at the email I sent you?
16:55 < metahost> zoq: Are we good to go on the PR (ensmallen#149)?
17:10 < kartikdutt18Gitt> Hi @zoq, If you get the chance could you have a look at #2195, I wanted to know on how I should proceed with it. Thanks.
17:17 < rcurtin> PrinceGuptaGitte: okay, did you try to time and profile it to see where the runtime is actually being spent?
17:50 < volhard[m]> <kartikdutt18Gitt "I think in frequency patterns be"> Thanks. I'm new to the whole ML thing.
17:52 < volhard[m]> <metahost "volhard: you may! Tasks like wak"> The network needs to preserve a latent representation for an arbitrary duration. Do CNNs work for this purpose?
17:56 < volhard[m]> Wait, is this off topic?
17:58 < kartikdutt18Gitt> Hi @volhard, people here are more than happy to help. For fixed-duration signals such as 500ms chirps, I think RNNs / LSTMs etc. should be a good idea.
18:30 < volhard[m]> Sorry. The interval between the chirps is 500ms. The chirps last about 50ms (8kHz to 2kHz, exponential drop). I'm also feeding the inertial measurements (Ax,Ay,Az,Gx,Gy,Gz) of the motion of the microphone cluster. The inertial data is very noisy, so I'm not sure if I should pass it as such (I'll try smoothing). The network is to generate depth maps of the environment (8kHz implies a wavelength close to 5cm, so not
18:30 < volhard[m]> unreasonable). Thus I extract depth from video (640x480@30fps; 3 channels) for backprop.
18:31 -!- togo [~togo@2a02:6d40:34aa:b301:8089:cdda:34f2:204d] has joined #mlpack
18:37 < volhard[m]> Does the fact that the fft frames stay near static for about 0.5 second (after the ping; 30 samples a second) have anything to do with RNN performance? I'm using GRU for the encoding layers. (actually ConvGRU for weight sharing due to the spatial organization of the microphones).
18:45 < volhard[m]> I've tried training. 69000 frames in total. 300 samples a sequence (worth 10s). Augmented by flipping x/y axes (of the inertial measurements too); otherwise the network falls for the lower_part_of_image-means-closer bias. KL-Divergence weight scheduling from 0.001 to 0.2 over several iterations (MSE stays constant). So far nothing satisfactory.
19:20 < Param-29Gitter[m> Hello @zoq and @rcurtin, please have a look at #2169 once you are free. I wanted to start working on another program and try to parallelize it, but I need your review on the current PR first so that I can understand which programs I should work on next.
19:29 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has quit [Quit: Leaving]
19:39 -!- abhi [0e8bf0f7@] has joined #mlpack
19:45 -!- abhi [0e8bf0f7@] has quit [Ping timeout: 260 seconds]
20:07 < zoq> saksham189: Ohh, yeah, just responded.
20:55 < rcurtin> Param-29Gitter[m (and others), please be patient; maintainers will get to the reviews when we can, and asking us to do it won't make it happen quicker
20:56 < rcurtin> I see tons of requests for review in this channel every day and honestly it's a bit overwhelming...
20:56 < rcurtin> I'd love to review everything, but I can't get to it all at one time
20:57 < rcurtin> however, I have just now gotten home from a two-week trip and so I should be able to pick up the pace of the reviews a good bit :)
21:37 < metahost> rcurtin: Ryan, I do understand that only maintainers can merge PRs and approve them but can contributors help with the review workload too? I think that may help offload some work :)
22:21 -!- k3nz0__ [~k3nz0@unaffiliated/k3nz0] has joined #mlpack
22:24 -!- k3nz0_ [~k3nz0@unaffiliated/k3nz0] has quit [Ping timeout: 260 seconds]
23:19 -!- UmarJ [~UmarJ@] has joined #mlpack
23:24 -!- togo [~togo@2a02:6d40:34aa:b301:8089:cdda:34f2:204d] has quit [Ping timeout: 246 seconds]
23:40 -!- k3nz0__ [~k3nz0@unaffiliated/k3nz0] has quit [Ping timeout: 272 seconds]
23:58 -!- UmarGitter[m] [gitterumar@gateway/shell/] has joined #mlpack
--- Log closed Mon Feb 17 00:00:05 2020