mlpack IRC logs, 2020-04-01
Logs for the day 2020-04-01 (starts at 0:00 UTC) are shown below.
--- Log opened Wed Apr 01 00:00:53 2020
06:43 < ArunavShandeelya> <abhisaphire[m] "Hey Arunav Shandeelya: "> Thank you, I am looking into it.
06:56 -!- jenkins-mlpack2 [~PircBotx@knife.lugatgt.org] has quit [Ping timeout: 265 seconds]
07:00 -!- jenkins-mlpack2 [~PircBotx@knife.lugatgt.org] has joined #mlpack
07:24 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has joined #mlpack
11:25 < PrinceGuptaGitte> Hi @kartikdutt18, can you tell me how you made the `LSTM_MULTIVARIATE_TIME_SERIES.ipynb` file in your PR in the examples repo (e.g. did you use Jupyter or something else)?
11:25 < PrinceGuptaGitte> I was thinking if it'll be a good idea to add some basic CV examples in notebook format.
11:33 < kartikdutt18Gitt> Yes it will be. I used the Xeus-Cling kernel for a Jupyter notebook. However, I faced some issues with the current implementation, so zoq will be opening a PR with some examples as well; I think that would be a better reference than what I did. Also, a CV notebook would be even better, since we could plot some images etc.
11:34 < kartikdutt18Gitt> Here is the GitHub link for [xeus-cling](https://github.com/jupyter-xeus/xeus-cling).
11:36 < PrinceGuptaGitte> Oh nice, then I'll wait till zoq opens a PR. Thanks for the info.
11:37 < PrinceGuptaGitte> I'll also try with this notebook though, it seems cool
11:37 < kartikdutt18Gitt> You can try running the examples with it until then; it might work on your system. It's really cool compared to normal cpp files.
11:39 < himanshu_pathak[> Yeah, nice idea. kartikdutt18 (Gitter): I tried to run your LSTM example; I thought there was a problem with my setup.
11:41 < himanshu_pathak[> Every time it runs a new shell, it disconnects and deletes the previous history; I don't know why.
11:41 < himanshu_pathak[> *when I run a code cell
11:50 < kartikdutt18Gitt> I am not sure why either, on my machine it only runs one epoch. The cpp version works fine so I don't know why this is happening.
11:57 < himanshu_pathak[> Maybe somewhere in the mlpack code we are using a feature that the xeus-cling compiler does not support (not sure about this), or maybe we are doing something silly.
11:59 < kartikdutt18Gitt> I am pretty sure it's the latter; I just don't know what I am missing.
12:05 < himanshu_pathak[> Yeah that one has higher probability of happening. I will get back to you if I find something.
12:28 < LakshyaOjhaGitte> @favre you online?
13:15 < kartikdutt18Gitt> himanshu_pathak, Great.
13:28 -!- favre49 [~email@example.com] has joined #mlpack
13:28 < favre49> LakshyaOjhaGitte: now I am
13:43 < LakshyaOjhaGitte> wanted to know if fold layer is a good addition?
13:43 < LakshyaOjhaGitte> https://pytorch.org/docs/stable/nn.html#fold
13:43 < LakshyaOjhaGitte> @sreenik also, what do you suggest if I should proceed with this?
13:53 < favre49> Hmmm while I guess feature parity with pytorch is a good thing, I'm not sure I understand the utility of their fold layer?
13:56 < LakshyaOjhaGitte> Actually I thought the same, had a talk with @kartikdutt18 also regarding this but I thought I should ask first. I think @sreenik will be able to give a good insight also
13:56 < LakshyaOjhaGitte> (edited) ... insight also => ... insight also.
13:57 < sreenik[m]> favre49: Yes even I didn't get a very good idea of its use. I believe this has something to do with 4-d tensors
14:00 < favre49> I came across this: https://discuss.pytorch.org/t/some-question-about-defining-a-new-pool-layer/33868
14:01 < Saksham[m]> Has it been used in literature somewhere ?
14:02 < kartikdutt18Gitt> Hey sreenik, favre49, as far as I understand, the unfold portion returns a reshaped input so that an operation can be performed along an axis. I think for this purpose we can simply change inputW and inputH for the next layer, since we pass matrices between layers. What do you think?
14:09 < kartikdutt18Gitt> Does that make sense?
14:21 < jeffin143[m]> https://twitter.com/sseraphini/status/1245061383466758144?s=19
14:21 < jeffin143[m]> :)
14:24 < sreenik[m]> kartikdutt18: that looks like an adaptation of the torch layer but given mlpack's structure its importance/usage also needs to be weighed.
14:29 < LakshyaOjhaGitte> Yes, valid point.
14:29 < LakshyaOjhaGitte> Found something on matlab might help with this discussion
14:29 < LakshyaOjhaGitte> https://in.mathworks.com/help/deeplearning/ref/nnet.cnn.layer.sequencefoldinglayer.html#mw_e600a552-2ab0-48a8-b1d9-ae672b821805
14:30 < LakshyaOjhaGitte> they are saying to apply the convolution operation individually at each time step
14:30 < kartikdutt18Gitt> > kartikdutt18: that looks like an adaptation of the torch layer but given mlpack's structure its importance/usage also needs to be weighed.
14:30 < kartikdutt18Gitt> That makes sense.
14:36 < LakshyaOjhaGitte> I think it is more useful in pytorch because it has n-dimensional data.
14:41 < AbishaiEbenezerG> hi @rcurtin. With reference to #2347, I checked my CMakeLists and it was pretty different compared to the one posted on the issue. In fact, I had gotten the same error a few weeks ago, where the preprocessor was complaining about armadillo
14:41 < AbishaiEbenezerG> and it was corrected with a simple make install
14:42 < rcurtin> AbishaiEbenezerG: I think the bug report isn't about the CMakeLists.txt in the mlpack repository, it looks to me like the user is building mlpack into a downstream application with its own CMake configuration (which is the one that was posted)
14:44 < AbishaiEbenezerG> oh ok
14:45 < AbishaiEbenezerG> but how do you know it's a downstream application @rcurtin?
14:46 < AbishaiEbenezerG> and also, I could not find the FindMLPACK.cmake file in the version of mlpack I have...
14:47 < AbishaiEbenezerG> I'm just kinda curious to know why there are differences...
14:51 < jeffin143[m]> rcurtin (@freenode_rcurtin:matrix.org): done with image binding , do take a look before you release
14:52 < jeffin143[m]> And also the shuffle data split
14:53 < AbishaiEbenezerG> sorry @rcurtin totally my bad. I found the FindMLPACK.cmake file
14:53 -!- favre49 [~firstname.lastname@example.org] has quit [Remote host closed the connection]
14:56 < rcurtin> AbishaiEbenezerG: no worries :)
14:56 < rcurtin> AbishaiEbenezerG: I knew it was downstream because of the first sentence of the issue: "Hello, I have a problem with building my package that uses mlpack." :)
14:57 < rcurtin> jeffin143[m]: awesome, thanks---we decided to push #1366 to a future release so probably #2019 (and the shuffle data split, I think that is close) are close to the last things
14:59 < himanshu_pathak[> Hey rcurtin: Can you check #2324? I think that is also necessary to merge before the release (it adds a copy constructor)
15:04 < himanshu_pathak[> You mentioned this in #2326
15:06 < jeffin143[m]> rcurtin: #1366 would probably be great to have
15:06 < jeffin143[m]> I am following it closely :)
15:07 < rcurtin> jeffin143[m]: yeah, agreed, it can help clean things a lot
15:08 < rcurtin> himanshu_pathak[: sure, I agree, the copy constructors working correctly would be nice. I'll try to review #2324 today, and if it's close, we can hold up the release for it, but if it looks like there's still a long way to go, maybe not worth it (that could be a part of a 3.3.1 release)
15:09 < jeffin143[m]> rcurtin (@freenode_rcurtin:matrix.org): when are you planing for the release ??
15:29 < rcurtin> hehe, three weeks ago? :-D
15:30 < rcurtin> I think it can be done this week. but probably we shouldn't trust my estimates anymore, or ever :)
15:30 < rcurtin> I made some nice steps forward with the automated ensmallen release process, so I'm hoping maybe I can do the same for mlpack---then, later, we won't need me to do these big manual releases, and it will be much easier to have quicker releases
17:02 < metahost> rcurtin: how do you decide on the release schedule? (As in when to release, what to include etc.)
17:03 -!- favre49 [~email@example.com] has joined #mlpack
17:14 -!- favre49 [~firstname.lastname@example.org] has quit [Quit: Lost terminal]
17:22 < rcurtin> metahost: ...it's basically arbitrary :) one of the things that I have always thought is that I've always wanted to release mlpack more often, but it's tricky because there's often so much in motion
17:22 < rcurtin> I think that, if I can automate release scripts, we can adopt a policy of "release every handful of weeks"
17:22 < rcurtin> I've made some attempts to have scripts do this, but they're not perfect (...yet :))
17:22 < rcurtin> time is really the limiting factor though
17:27 < LakshyaOjhaGitte> What is the scenario at your place?
17:28 < jeffin143[m]> Asia largest slum reports first death due to covid
17:28 < LakshyaOjhaGitte> just heard US numbers reached 2 lakh (200,000)
17:29 < jeffin143[m]> 7 lakh people in 2.1 sq km
17:30 < LakshyaOjhaGitte> pretty bad :(
17:30 < rcurtin> I live in Atlanta and here there is a shelter-in-place order, which basically means "no non-essential trips", so, I'm only going out to buy groceries every two weeks or so
17:31 < rcurtin> I don't currently know anyone who is infected; the most tests are being done in New York, not where I am in Georgia. I think there are far more infections than are reported here, but that's just a guess
17:31 < rcurtin> for me life is not changing much, I already worked from home many days, now I work from home every day :)
17:32 < abhisaphire[m]> Same scenario in india too rcurtin
17:32 < rcurtin> yeah, it's very unfortunate, and the worst thing is that the worst is yet to come :(
17:32 < jeffin143[m]> rcurtin (@freenode_rcurtin:matrix.org): exactly, less testing is being done than expected, and hence fewer positive cases
17:33 < jeffin143[m]> Yes, I hope someone comes up with a vaccine or something
17:33 < PrinceGuptaGitte> I live in Delhi, India, and there is a known community outbreak now. I'm not going out anymore
17:33 < LakshyaOjhaGitte> it might take months to curb this outbreak, and a vaccine is not even expected before a year from now.
17:34 < PrinceGuptaGitte> Worst part about this virus is that it is very infectious, with a high mortality rate of around 30% and a convenient incubation period of over a week (for it to spread)
17:34 < LakshyaOjhaGitte> fact: the fastest vaccine on record took 5 years to develop
17:35 < rcurtin> one good thing is that lack of widespread testing means that the reported mortality rate is likely a pretty loose upper bound
17:35 < rcurtin> (it's still far higher than anyone would hope though!)
17:36 < PrinceGuptaGitte> Indeed.
17:36 < jeffin143[m]> I wish it were as easy as writing a simple snippet of code
17:39 < jeffin143[m]> I guess after everything comes back to normal there will be a shift in the software industry
17:39 < jeffin143[m]> More people will be looking for work-from-home perks
17:40 < rcurtin> yeah, that's actually something I'm really hopeful about. it might help with traffic and air pollution :)
17:43 < abhisaphire[m]> Indeed the only good side of this lockdown is rejuvenation of nature
17:45 < LakshyaOjhaGitte> People living in and around delhi can see the real difference in Air pollution.
17:46 < LakshyaOjhaGitte> People here are happy with an AQI of 158, which means unhealthy XD
17:54 < LakshyaOjhaGitte> Just to lighten up the mood, here's a funny video https://www.youtube.com/watch?v=BK0KcFu1Dtk
17:55 < LakshyaOjhaGitte> XD
17:57 < PrinceGuptaGitte> Hi @kartikdutt18, sorry to ping you again, but can you tell me how can i link mlpack in the xeus-cling notebook?'
17:57 < PrinceGuptaGitte> (edited) ... xeus-cling notebook?' => ... xeus-cling notebook?
17:57 < PrinceGuptaGitte> (edited) ... you tell ... => ... you please tell ...
18:00 < zoq> PrinceGuptaGitte: See the pragma on the top of the notebook.
18:02 < PrinceGuptaGitte> I can't find it
18:03 < LakshyaOjhaGitte> Hey @zoq wanted to ask you, is there any good in implementing the fold/unfold layer in mlpack?
18:03 < LakshyaOjhaGitte> I think it is only good for n-dimensional data; I'm not sure.
18:04 < LakshyaOjhaGitte> sorry to interrupt your conversation @prince776
18:05 < PrinceGuptaGitte> It's okay :)
18:06 * kartikdutt18Gitt sent a long message: < https://matrix.org/_matrix/media/r0/download/matrix.org/wiLdxUYMkZSXPWgFdvEpkebC >
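(The long message above is a media link; its content is not preserved in the log. For reference, the first cell of a xeus-cling notebook typically uses cling pragmas to make a library visible to the interpreter. The paths and library name below are assumptions that vary by installation:)

```cpp
// Hypothetical setup cell for a xeus-cling notebook. The include and
// library paths are assumptions; adjust them to your installation.
#pragma cling add_include_path("/usr/local/include")
#pragma cling add_library_path("/usr/local/lib")
#pragma cling load("mlpack")

#include <mlpack/core.hpp>
```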
18:07 < PrinceGuptaGitte> Thanks
18:10 < zoq> LakshyaOjhaGitte: What would that layer do?
18:11 < zoq> LakshyaOjhaGitte: And do you have a method in mind that would use that layer?
18:13 < LakshyaOjhaGitte> I think it is just used to reshape the input matrix and generate more layers/slices in it.
18:14 < LakshyaOjhaGitte> https://pytorch.org/docs/stable/nn.html#unfold , I think it is not useful in mlpack because of dimension limitation
18:16 < zoq> LakshyaOjhaGitte: In mlpack slices are encoded as cols, so on a first look I think you could use the subview layer to get the same result.
18:19 < kartikdutt18Gitt> I agree; since layers pass matrices, we can simply use subview or change the input params for the next layer (if possible).
18:20 < LakshyaOjhaGitte> So in the end it is not advisable to implement this layer here.
18:20 < LakshyaOjhaGitte> It was good that I asked, got everything clarified.
18:21 < zoq> LakshyaOjhaGitte: At least I don't see a need for the layer right now, maybe there is some need in the future.
18:55 -!- togo [~togo@2a02:6d40:34f8:8901:15a8:dfcc:4e7f:de10] has joined #mlpack
19:07 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has quit [Quit: Leaving]
20:11 < LakshyaOjhaGitte> Okay
20:11 < LakshyaOjhaGitte> Thanks
20:34 < metahost> rcurtin: Interesting! Versioning is a mysterious task. :P
21:01 < rcurtin> metahost: yeah, more work than I would hope, but it's a necessary evil :)
21:22 -!- louisway [email@example.com] has joined #mlpack
21:37 < metahost> Haha
21:49 -!- louisway [firstname.lastname@example.org] has quit [Remote host closed the connection]
22:54 -!- togo [~togo@2a02:6d40:34f8:8901:15a8:dfcc:4e7f:de10] has quit [Quit: Leaving]
--- Log closed Thu Apr 02 00:00:55 2020