Dmytro Mishkin
@ducha_aiki

Marrying classical CV and Deep Learning. I do things that work, rather than things that are novel but do not work.

Dmytro Mishkin    @ducha_aiki
I have tried LoFTR on my datasets and it is pure magic. A longer post is coming soon, just wanted to say "Wow!" Img-pair 1: thermal vs visible + viewpoint change. Img-pair 2: light + scale. P.S. I just realised that LoFTR can also stand for "Lord OF the Rings". Is that intentional? https://t.co/pblpZ63KD3
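
A minimal sketch of trying LoFTR, assuming the kornia port (kornia.feature.LoFTR with the released "outdoor" weights) rather than the original repo:

import torch
import kornia.feature as KF

matcher = KF.LoFTR(pretrained="outdoor").eval()

# LoFTR expects grayscale tensors in [0, 1], shaped (B, 1, H, W)
img0 = torch.rand(1, 1, 480, 640)  # stand-in for a real image pair
img1 = torch.rand(1, 1, 480, 640)

with torch.inference_mode():
    out = matcher({"image0": img0, "image1": img1})

mkpts0 = out["keypoints0"]   # (N, 2) matched keypoints in image0
mkpts1 = out["keypoints1"]   # (N, 2) corresponding points in image1
conf = out["confidence"]     # (N,) per-match confidence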

Dmytro Mishkin    @ducha_aiki
While we have got used to comparing classic algorithms with deep learning on speed like this: classic (CPU), e.g. SIFT - 0.x1 sec; fancy DL (GPU), e.g. TFeat - 0.x2 sec, do not forget that the classics can be implemented in CUDA as well. E.g. CUDA-SIFT: 2.3 ms (!) per 1920x1080 image. https://t.co/EjBZfS0FHy
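
For reference, a rough sketch of timing the CPU baseline with OpenCV's SIFT; the 2.3 ms figure above comes from the C++/CUDA CudaSift project, which this does not reproduce:

import time

import cv2
import numpy as np

img = (np.random.rand(1080, 1920) * 255).astype(np.uint8)  # stand-in for a 1920x1080 frame
sift = cv2.SIFT_create()

sift.detectAndCompute(img, None)  # warm-up run before timing
t0 = time.perf_counter()
n_runs = 10
for _ in range(n_runs):
    kpts, descs = sift.detectAndCompute(img, None)
dt = (time.perf_counter() - t0) / n_runs
print(f"CPU SIFT: {dt * 1000:.1f} ms per image, {len(kpts)} keypoints")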

Dmytro Mishkin    @ducha_aiki
TIL that there is a journal version of the AlexNet paper ... from 2017. Yes, after VGGNet, Inception, ResNet, BatchNorm. https://t.co/mFD8kDiwpq P.S. OK, it is not even a "journal" aka extended version, it is a reprint with an additional "prologue" featuring YLC's famous reject.

Dmytro Mishkin    @ducha_aiki
CycleMLP: A MLP-like Architecture for Dense Prediction tl;dr: it is now fashionable to call your architecture an MLP, even when it has spatial structure == a convolution with an exotic kernel. https://t.co/B30GKdzLhb P.S. I haven't read this paper carefully, just wanted to conv-plain.
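
A tiny numerical check of the point, my illustration rather than anything from the paper: a per-pixel MLP is exactly a 1x1 convolution, so giving it spatial structure only changes the kernel:

import torch

B, C, H, W = 2, 8, 16, 16
x = torch.randn(B, C, H, W)

lin = torch.nn.Linear(C, C, bias=False)  # the "MLP", shared across all pixels
y_mlp = lin(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)  # channels-last, apply, back

conv = torch.nn.Conv2d(C, C, kernel_size=1, bias=False)  # same weights as a 1x1 conv
with torch.no_grad():
    conv.weight.copy_(lin.weight.view(C, C, 1, 1))
y_conv = conv(x)

print(torch.allclose(y_mlp, y_conv, atol=1e-6))  # True: identical operation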

Dmytro Mishkin    @ducha_aiki
Update in kornia_moons: now you can draw inliers, tentative matches, non-matched features, epipolar lines and homography with a single function, all at the same time. https://t.co/w3j2QTvL5T #kornia #wxbs
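
A minimal sketch of the one-call visualization on toy data, assuming the function is draw_LAF_matches (the import path may be kornia_moons.viz in newer releases) and that the draw_dict keys below are still current:

import numpy as np
import torch
import kornia.feature as KF
from kornia_moons.feature import draw_LAF_matches

# toy inputs: random keypoints turned into local affine frames (LAFs)
img0 = np.random.rand(240, 320, 3)
img1 = np.random.rand(240, 320, 3)
centers0 = torch.rand(1, 20, 2) * 200
centers1 = torch.rand(1, 20, 2) * 200
scale = torch.full((1, 20, 1, 1), 6.0)
ori = torch.zeros(1, 20, 1)
lafs0 = KF.laf_from_center_scale_ori(centers0, scale, ori)
lafs1 = KF.laf_from_center_scale_ori(centers1, scale, ori)

tent_idxs = torch.arange(20).view(-1, 1).repeat(1, 2)  # i-th feature matches i-th
inlier_mask = np.random.rand(20) > 0.5                 # stand-in for a RANSAC mask

draw_LAF_matches(
    lafs0, lafs1, tent_idxs, img0, img1, inlier_mask,
    draw_dict={
        "inlier_color": (0.2, 1, 0.2),   # green inliers
        "tentative_color": (1, 1, 0.2),  # yellow tentative matches
        "feature_color": (0.2, 0.5, 1),  # blue non-matched features
        "vertical": False,
    },
)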

Dmytro Mishkin    @ducha_aiki
While you may not need an experienced #kaggle player (better yet, a GM) for most machine learning papers, you surely need one to be involved in "deep learning beats XGBoost" kind of papers. Here is why

Dmytro Mishkin    @ducha_aiki
DFM: A Performance Baseline for Deep Feature Matching Ufuk Efe, Kutalmis Gokalp Ince, @aydinalatan Tl;dr GLU-Net without training: coarse-to-fine matching by VGG layers, correspondence refinement, homography estimation -> repeat. #CVPR2021 #IMC2021 https://t.co/6VnGwcfOxR
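
A minimal sketch of the training-free core as I read it, not the authors' code; the VGG-19 layer indices are my assumption, and the refinement and homography steps are only indicated:

import torch
import torch.nn.functional as F
import torchvision

vgg = torchvision.models.vgg19(weights="DEFAULT").features.eval()

def deep_features(img, layer_ids=(17, 35)):  # relu3_4 (fine) and relu5_4 (coarse)
    feats, x = {}, img
    with torch.inference_mode():
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in layer_ids:
                feats[i] = F.normalize(x, dim=1)
    return feats

def mutual_nn(fa, fb):
    # dense matching between two (1, C, H, W) feature maps via mutual nearest neighbors
    a = fa.flatten(2).squeeze(0).T  # (Ha*Wa, C)
    b = fb.flatten(2).squeeze(0).T  # (Hb*Wb, C)
    sim = a @ b.T
    nn12, nn21 = sim.argmax(1), sim.argmax(0)
    ids = torch.arange(a.shape[0])
    keep = nn21[nn12] == ids        # keep only mutual nearest neighbors
    return ids[keep], nn12[keep]

f0 = deep_features(torch.rand(1, 3, 224, 224))
f1 = deep_features(torch.rand(1, 3, 224, 224))
idx0, idx1 = mutual_nn(f0[35], f1[35])
# next: refine each coarse match within the finer map f0[17]/f1[17],
# estimate a homography from the matches, warp, and repeat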

Dmytro Mishkin    @ducha_aiki
Scale-invariant scale-channel networks: Deep networks that generalise to previously unseen scales Ylva Jansson, Tony Lindeberg tl;dr: extension of scale-space theory to CNNs. Running the same network on a scale pyramid -> max/avg pool is the way to go. https://t.co/MwQ9VUPthj
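
A minimal sketch of the recipe, with scale choices and pooling mode as assumptions rather than the paper's exact setup:

import torch
import torch.nn.functional as F

class ScaleChannelNet(torch.nn.Module):
    """Apply one shared backbone over a scale pyramid, then pool over scales."""
    def __init__(self, backbone, scales=(0.5, 0.71, 1.0, 1.41, 2.0), pool="max"):
        super().__init__()
        self.backbone = backbone  # must handle varying input sizes, e.g. via global pooling
        self.scales = scales
        self.pool = pool

    def forward(self, x):
        outs = []
        for s in self.scales:
            xs = F.interpolate(x, scale_factor=s, mode="bilinear", align_corners=False)
            outs.append(self.backbone(xs))  # same weights for every scale channel
        z = torch.stack(outs, dim=1)        # (B, n_scales, n_classes)
        return z.max(dim=1).values if self.pool == "max" else z.mean(dim=1)

# usage: any size-agnostic classifier works as the backbone
import torchvision
net = ScaleChannelNet(torchvision.models.resnet18(num_classes=10))
logits = net(torch.rand(2, 3, 224, 224))  # (2, 10), pooled over five scale channels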

Dmytro Mishkin    @ducha_aiki
DGD-NET: Local Descriptor Guided Keypoint Detection Network Xiaotao Liu, Chen Meng, Feipeng Tian, Wei Feng Idea: learn the matching score directly from the matching loss value, rather than indirectly as in R2D2. I like the idea, although the evaluation is poor: HPatches only. https://t.co/GHHlY9qjbQ
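
A toy sketch of how such direct supervision could look; the exp(-loss) target and the MSE are my assumptions, not the paper's actual loss:

import torch
import torch.nn.functional as F

def detection_loss(scores, desc_matching_loss):
    """Supervise per-keypoint scores with the descriptor matching loss itself:
    keypoints whose descriptors match well (low loss) get a high target score."""
    target = torch.exp(-desc_matching_loss.detach())  # map loss in [0, inf) to (0, 1]
    return F.mse_loss(scores, target)

# usage: scores (N,) from the detector head, per-keypoint descriptor loss (N,)
loss = detection_loss(torch.rand(128), torch.rand(128))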

Dmytro Mishkin    @ducha_aiki
Review question. Imagine there are two apparently disconnected subareas of research, A and B. They rediscover similar methods and don't talk to each other. Suddenly, person RA from A reviews a paper from B. Should RA demand proper acknowledgement of and comparison to methods from A, and maybe question the novelty?

Dmytro Mishkin    @ducha_aiki
Do you know any good pretrained model for surface normal estimation? PyTorch preferred.
