mlpack IRC logs, 2020-02-19

Logs for the day 2020-02-19 (starts at 0:00 UTC) are shown below.

--- Log opened Wed Feb 19 00:00:08 2020
00:46 -!- togo [~togo@2a02:6d40:34aa:b301:8089:cdda:34f2:204d] has quit [Quit: Leaving]
01:46 < Saksham[m]> ryan I see that there is no current implementation of a denoising autoencoder; I would like to work on adding this!
01:55 -!- k3nz0_ [~k3nz0@unaffiliated/k3nz0] has quit [Ping timeout: 260 seconds]
02:34 < rcurtin> Saksham[m]: sounds good to me, do we need any special layers for it or anything?
02:35 < Saksham[m]> I don’t think we do; if I require something while going through the implementation, I’ll see to it then
03:13 < rcurtin> Saksham[m]: sounds good then; maybe it makes sense to add into the models/ repo? or perhaps into its own directory in mlpack/methods/? I'm not sure exactly what you're thinking, just tossing some ideas out there :)
03:16 < Param-29Gitter[m> Hey @rcurtin regarding #2169, do you want me to implement OpenMP along with the SIMD block? Because it works as you expect it to.
03:33 < rcurtin> HimanshuPathakGi: hmm, maybe BOOST_ALL_DYN_LINK is needed? that code snippet you pasted, have you tried it? if it works I have no problem including it as a patch into mlpack
03:34 < rcurtin> Param-29Gitter[m: right, yes, the way to accelerate it will be a combination of OpenMP and SIMD like I said; if you can show good speedup for large sizes of labels arrays, and if there is not significant slowdown if OMP_NUM_THREADS=1 then I think it would be nice to include
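The OpenMP-plus-SIMD pattern rcurtin describes might look roughly like the sketch below (the function name and shapes are made up for illustration, not taken from #2169; without `-fopenmp` the pragmas are simply ignored and the loop runs serially):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch: count occurrences of each label using OpenMP threads,
// with a simple inner loop the compiler can auto-vectorize. Each thread
// accumulates into a private histogram, then the results are merged, so no
// atomics are needed in the hot loop.
std::vector<size_t> CountLabels(const std::vector<size_t>& labels,
                                const size_t numClasses)
{
  std::vector<size_t> counts(numClasses, 0);

  #pragma omp parallel
  {
    // Thread-local histogram avoids contention on the shared counts.
    std::vector<size_t> local(numClasses, 0);

    #pragma omp for nowait
    for (long i = 0; i < (long) labels.size(); ++i)
      local[labels[i]]++;

    // Merge the thread-local results; this section is short, so the
    // critical region costs little.
    #pragma omp critical
    for (size_t c = 0; c < numClasses; ++c)
      counts[c] += local[c];
  }

  return counts;
}
```

With `OMP_NUM_THREADS=1` this degenerates to a plain loop plus one merge pass, which is the "no significant slowdown" case mentioned above.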
03:50 < Param-29Gitter[m> Ok thanks :)
03:50 -!- travis-ci [~travis-ci@ec2-52-23-174-165.compute-1.amazonaws.com] has joined #mlpack
03:50 < travis-ci> mlpack/ensmallen#668 (conradsnicta-readme-cmake-note - 0ac60f2 : Conrad Sanderson): The build passed.
03:50 < travis-ci> Change view : https://github.com/mlpack/ensmallen/commit/0ac60f2e0fe5
03:50 < travis-ci> Build details : https://travis-ci.org/mlpack/ensmallen/builds/652298693
03:50 -!- travis-ci [~travis-ci@ec2-52-23-174-165.compute-1.amazonaws.com] has left #mlpack []
03:51 < Param-29Gitter[m> I have also added my views on the above. Please have a look once you are free.
04:01 -!- UmarJ [~UmarJ@111.68.97.205] has quit [Ping timeout: 265 seconds]
04:21 -!- UmarJ [~UmarJ@111.68.97.205] has joined #mlpack
04:49 < Saksham[m]> Ryan Curtin> also, how about Depth Gated RNN layers? I want to add this first, as I was going through some literature for a research project and came across it as a recent improvement in the field. I've also referenced the paper: <https://arxiv.org/pdf/1508.03790v2.pdf>
05:04 -!- UmarJ [~UmarJ@111.68.97.205] has quit [Ping timeout: 265 seconds]
05:07 -!- UmarJ [~UmarJ@111.68.97.205] has joined #mlpack
05:51 -!- UmarJ [~UmarJ@111.68.97.205] has quit [Ping timeout: 260 seconds]
05:52 -!- UmarJ [~UmarJ@111.68.97.205] has joined #mlpack
06:08 -!- UmarJ [~UmarJ@111.68.97.205] has quit [Ping timeout: 265 seconds]
06:10 -!- UmarJ [~UmarJ@111.68.97.205] has joined #mlpack
06:14 -!- UmarJ [~UmarJ@111.68.97.205] has quit [Ping timeout: 240 seconds]
06:16 -!- UmarJ [~UmarJ@111.68.97.205] has joined #mlpack
06:40 -!- UmarJ [~UmarJ@111.68.97.205] has quit [Ping timeout: 272 seconds]
08:15 < jenkins-mlpack2> Yippee, build fixed!
08:15 < jenkins-mlpack2> Project docker mlpack nightly build build #618: FIXED in 3 hr 1 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/618/
09:46 -!- UmarJ [~UmarJ@111.68.97.205] has joined #mlpack
10:27 -!- k3nz0_ [~k3nz0@unaffiliated/k3nz0] has joined #mlpack
10:54 -!- UmarJ [~UmarJ@111.68.97.205] has quit [Ping timeout: 260 seconds]
11:44 -!- Abhi65 [75d4f66f@117.212.246.111] has joined #mlpack
11:45 -!- Abhi65 is now known as AbhiSaphire
11:49 < AbhiSaphire> hey folks,
11:51 -!- Netsplit *.net <-> *.split quits: volhard[m], AbhinavvermaGitt, RohitKartikGitte, outmanipulateGit
11:51 -!- AbhiSaphire [75d4f66f@117.212.246.111] has quit [Remote host closed the connection]
11:53 -!- AbhinavvermaGitt [gitterab5@gateway/shell/matrix.org/x-bkiksbkwvqcklxks] has joined #mlpack
11:54 -!- outmanipulateGit [gitteroutm@gateway/shell/matrix.org/x-uzyoxrympbmvvdro] has joined #mlpack
11:54 -!- volhard[m] [volhardmat@gateway/shell/matrix.org/x-eubvluznwwwpaqcd] has joined #mlpack
11:54 -!- RohitKartikGitte [gitteroo7k@gateway/shell/matrix.org/x-fwdbuufyccccsgng] has joined #mlpack
12:16 -!- Keyur[m] [slackmlp8@gateway/shell/matrix.org/x-sccxgjxqrkagotmz] has quit [Ping timeout: 252 seconds]
12:16 -!- Keyur[m] [slackmlp8@gateway/shell/matrix.org/x-jjracghjxoqfmiol] has joined #mlpack
12:21 -!- AbhiSaphire [75d4f66f@117.212.246.111] has joined #mlpack
12:33 < AbhiSaphire> Hello everyone, my name is Abhishek and I am a pre-final year CSE student from India. I am very interested in contributing to one of the GSoC 2020 ideas, "Application of ANN Algorithms Implemented in mlpack", as a student participant. Can anyone help me figure out where to start?
12:37 * LakshyaOjhaGitte sent a long message: < https://matrix.org/_matrix/media/r0/download/matrix.org/OKudmwmbxIKGoSidxosBpSLE >
12:38 < LakshyaOjhaGitte> Hi, can anyone tell me How can I write this code in accordance to the doxygen syntax.
12:38 < LakshyaOjhaGitte> Thanks.
12:38 < LakshyaOjhaGitte> ![Screenshot from 2020-02-19 17-59-35](https://user-images.githubusercontent.com/57477999/74834554-a554d000-5341-11ea-8b6b-e8a2897428a7.png)
12:40 < PrinceGuptaGitte> Hi @AbhiSaphire see https://www.mlpack.org/gsoc.html for GSOC and https://www.mlpack.org/community.html to get started. Thanks for showing interest.
12:49 < PrinceGuptaGitte> Hi @zoq , I was looking through the GSoC idea list and I've got a good idea for **Application of ANN Algorithms Implemented in mlpack**. Since mlpack already has convolution layers, implementing Object Detection using the YOLO algorithm seems like a nice idea to me. However, I'm unsure if it's enough.
12:49 < PrinceGuptaGitte> (edited) ... convolution layers implementing ... => ... convolution layers, implementing ...
13:02 < AbhiSaphire> PrinceGuptaGitte You could also add batch normalization to all of the convolutional layers in YOLO to get more improvement. Batch normalization will also help regularize the model and prevent overfitting. And thanks for your help O:3
13:05 < PrinceGuptaGitte> Yes, batch norm has been useful many times.
13:06 < PrinceGuptaGitte> And mlpack already has it implemented
13:19 -!- AbhiSaphire [75d4f66f@117.212.246.111] has quit [Remote host closed the connection]
13:27 < zoq> PrinceGupta: I like the YOLO idea.
13:28 < PrinceGuptaGitte> That's good to hear.
13:28 < PrinceGuptaGitte> However, I had one doubt. Does mlpack use the GPU? I couldn't find any mention of it, and training the model on a CPU could take a lot of time
13:29 < kartikdutt18Gitt> Hi @prince776, currently mlpack doesn't support GPU.
13:29 < zoq> PrinceGupta: You could use nvblas, or maybe we could make use of bandicoot.
13:30 < PrinceGuptaGitte> @zoq I believe NVBLAS will act as backend for armadillo, right?
13:30 < zoq> right
13:31 < PrinceGuptaGitte> I looked through all the activation function code, and some of it actually loops manually and then applies the function, instead of using Armadillo functions.
13:31 < PrinceGuptaGitte> So I think we'll need to fix that.
13:32 < zoq> That would probably be a good idea.
13:32 < kartikdutt18Gitt> Agreed that's why I opened #2178 to benchmark the differences
13:33 < PrinceGuptaGitte> @kartikdutt18 that's what I was wondering: how was the GPU backend slower than manual for loops?
13:34 < kartikdutt18Gitt> @zoq, If you get the chance, could you have a look at #2195 ( I wanted to know how I should proceed).
13:36 < kartikdutt18Gitt> @prince776 , exactly what I thought; matrix operations (with parallel computation) should be faster.
13:37 < KhizirSiddiquiGi> @kartikdutt18 , wouldn't GPU usage in Armadillo be better than using it in mlpack?
13:37 < KhizirSiddiquiGi> I mean, matrix operations in armadillo.
13:39 < PrinceGuptaGitte> @khizirsiddiqui yes, and to test that @kartikdutt18 ran some tests in #2178, but normal for loops performed better.
13:39 < kartikdutt18Gitt> Yes, they should be. I haven't tested them with a GPU yet; with BLAS I got some results contradicting what logic dictates, so I closed the above PR. Once I redo all the benchmarks and am certain that the changes I made are faster, I will reopen it.
13:53 < kartikdutt18Gitt> @prince776 , sorry about the misinformation regarding the GPU. I thought Armadillo could only be accelerated by BLAS/OpenBLAS.
14:04 < SriramSKGitter[m> What benefits will bandicoot offer over NVBLAS?
14:08 < Param-29Gitter[m> Hey @rcurtin I have made the changes in #2169; please have a look once you are free. We get almost the same time with/without the use of SIMD. Also, I would like to make the same changes to information_gain for better performance using OpenMP.
14:10 < sreenik[m]> freenode_gitter_sriramsk1999[m]: When you use frameworks like TensorFlow or PyTorch, the GPU operations are done using cuBLAS, which means that the entire model is transferred onto the GPU and the operations are carried out there. Currently, Armadillo does not support cuBLAS; it only supports NVBLAS. With NVBLAS, on the other hand, the GPU is used to perform computations, but the entire model is not transferred to the
14:10 < sreenik[m]> GPU at once; it is done operation by operation and varies model to model, which means that there is a significant overhead in transferring values to the GPU, and it erodes the advantage of fast computation on a GPU. This is what I remember finding when I had the same doubt; zoq, do confirm whether it's correct
14:11 < sreenik[m]> Bandicoot, I guess, uses cuBlas or something similar
14:14 < SriramSKGitter[m> @sreenik : Isn't NVBLAS built on top of cuBLAS?
14:14 < rcurtin> Saksham[m]: that sounds good to me
14:56 < rcurtin> I dropped the ball on the video chat announcement for today, but anyway, 2200 UTC (7 hours from now)
14:57 < rcurtin> last time we used this time, everyone unanimously agreed that it was a bad time, so if nobody says it's a good time this time, we can just switch permanently to Thursdays at 1800 UTC
15:00 < zoq> Either is fine for me, a little bit late, but still works.
15:06 < Param-29Gitter[m> @rcurtin how do I ensure my program is compiled using SIMD instructions?
15:10 < PrinceGuptaGitte> It'll be 3:30 AM in my timezone. I think 1800 UTC is better. How do I access the video chat though?
15:11 < Saksham[m]> Anytime would work; how do we access it?
15:11 < Saksham[m]> Would love to be there
15:11 < rcurtin> PrinceGuptaGitte: I sent a message to the mailing list: http://knife.lugatgt.org/pipermail/mlpack/2020-February/004178.html
15:12 < rcurtin> Param-29Gitter[m: that's a bit outside the scope of what I can write in a chat message; I'd suggest using a search engine to find more information about how to get the instruction-level output of a compiler
15:12 < GauravSinghGitte> @rcurtin Yeah, Thursday at 1800 UTC will be fine.
15:13 -!- zoso_floyd [~zoso_floy@2409:4042:2399:e2fa:84aa:b94d:8c12:d30] has joined #mlpack
15:14 < rcurtin> GauravSinghGitte: right, the reason we do it in different timezones each time is to make sure that anyone in any time zone can attend either of them
15:14 -!- zoso_floyd [~zoso_floy@2409:4042:2399:e2fa:84aa:b94d:8c12:d30] has quit [Client Quit]
15:35 < jeffin143[m]> Probably will attend the Thursday one , not a morning person :)
15:37 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has joined #mlpack
15:54 < Param-29Gitter[m> @rcurtin Yes, Thursday 1800 UTC will be fine.
16:18 < KhizirSiddiquiGi> @rcurtin Thursday 1800 UTC please.
17:44 -!- AbhiSaphire [75d4f66f@117.212.246.111] has joined #mlpack
17:47 -!- AbhiSaphire [75d4f66f@117.212.246.111] has quit [Remote host closed the connection]
18:07 -!- tae [2d769ff2@45.118.159.242] has joined #mlpack
18:09 -!- tae [2d769ff2@45.118.159.242] has quit [Remote host closed the connection]
18:10 -!- k3nz0_ [~k3nz0@unaffiliated/k3nz0] has quit [Remote host closed the connection]
19:14 < HimanshuPathakGi> Hey rcurtin I tried to add a patch, but I think it's not working
19:15 < HimanshuPathakGi> Maybe because we are not initializing git inside the source zip of mlpack 3.2.2
19:15 < HimanshuPathakGi> Any idea how I can do this?
19:21 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has quit [Quit: Leaving]
20:05 * LakshyaOjhaGitte sent a long message: < https://matrix.org/_matrix/media/r0/download/matrix.org/yWZWYsppDggmkWuorPnNlNZc >
20:06 < LakshyaOjhaGitte> Actually the main problem is how to represent the piecewise function here.
20:29 < rcurtin> LakshyaOjhaGitte: wrap it in a formula block (@f[ ... @f]) and then use LaTeX syntax
20:29 < rcurtin> http://www.doxygen.nl/manual/formulas.html
20:30 < PrinceGuptaGitte> Hi, I'm working on implementing Residual Block for making ResNets. I opened Issue #2225 describing the implementation. I wanted to make sure the method I chose is appropriate so I can proceed further with it.
22:09 < jenkins-mlpack2> Project ensmallen.org website build build #69: FAILURE in 1.5 sec: http://ci.mlpack.org/job/ensmallen.org%20website%20build/69/
22:09 < jenkins-mlpack2> Ryan Curtin: Release version ...
22:10 < jenkins-mlpack2> Yippee, build fixed!
22:10 < jenkins-mlpack2> Project ensmallen.org website build build #70: FIXED in 2.8 sec: http://ci.mlpack.org/job/ensmallen.org%20website%20build/70/
22:10 < jenkins-mlpack2> Ryan Curtin: Fix release artifact.
22:24 -!- travis-ci [~travis-ci@ec2-3-87-60-170.compute-1.amazonaws.com] has joined #mlpack
22:24 < travis-ci> mlpack/ensmallen#675 (2.11.3 - a5891f1 : Ryan Curtin): The build passed.
22:24 < travis-ci> Change view : https://github.com/mlpack/ensmallen/compare/8663a12fac72^...a5891f11a262
22:24 < travis-ci> Build details : https://travis-ci.org/mlpack/ensmallen/builds/652716892
22:24 -!- travis-ci [~travis-ci@ec2-3-87-60-170.compute-1.amazonaws.com] has left #mlpack []
22:25 -!- travis-ci [~travis-ci@ec2-54-82-140-126.compute-1.amazonaws.com] has joined #mlpack
22:25 < travis-ci> coatless/ensmallen#1 (require-history-entry - f84655b : James Balamuta): The build passed.
22:25 < travis-ci> Change view : https://github.com/coatless/ensmallen/compare/7f13c8beacd3^...f84655b9592e
22:25 < travis-ci> Build details : https://travis-ci.com/coatless/ensmallen/builds/149718980
22:25 -!- travis-ci [~travis-ci@ec2-54-82-140-126.compute-1.amazonaws.com] has left #mlpack []
22:27 < rcurtin> HimanshuPathakGi: do you think that BOOST_ALL_DYN_LINK conflicts with BOOST_ALL_NO_LINK?
22:28 < rcurtin> also, probably you tried this a long time ago, but what happens when you build with -DBUILD_SHARED_LIBS=OFF?
22:31 -!- travis-ci [~travis-ci@ec2-34-201-119-51.compute-1.amazonaws.com] has joined #mlpack
22:31 < travis-ci> coatless/ensmallen#2 (require-history-entry - d6051fb : James Balamuta): The build passed.
22:31 < travis-ci> Change view : https://github.com/coatless/ensmallen/compare/f84655b9592e...d6051fb39341
22:31 < travis-ci> Build details : https://travis-ci.com/coatless/ensmallen/builds/149719858
22:31 -!- travis-ci [~travis-ci@ec2-34-201-119-51.compute-1.amazonaws.com] has left #mlpack []
22:42 < rcurtin> call for release-blocking issues and PRs for 3.3.0: http://knife.lugatgt.org/pipermail/mlpack/2020-February/004179.html
23:11 < sreenik[m]> Is the video meet over? Looks like I missed it by a whisker
23:48 < zoq> sreenik[m]: Yes, the next is going to be in two weeks.
--- Log closed Thu Feb 20 00:00:10 2020