He Wang, Tom Kelly, and their PGR students meet every Friday afternoon for the Machine Learning in 3D reading group. In each meeting, one person presents a cutting-edge machine learning paper from any field. It is an opportunity to share knowledge and practice academic communication skills.
This page contains the contents of past meetings (links to slides and meeting notes) and information about upcoming meetings.
Date: 31st Jan
He: the original GAN can hardly learn a mixture of two distributions well.
Jialin: U-Net concatenates the encoder features on the left to the decoder features on the right at the same layer.
Tom: these skip connections help maintain spatial resolution.
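The U-Net concatenation discussed above can be sketched as follows (a minimal illustration with assumed shapes, not the paper's code):

```python
import numpy as np

# Assumed shapes for illustration: a U-Net skip connection concatenates
# the encoder feature map with the decoder feature map at the same
# resolution, along the channel axis.
encoder_feat = np.random.rand(1, 64, 32, 32)   # (batch, channels, H, W)
decoder_feat = np.random.rand(1, 64, 32, 32)   # upsampled back to the same H, W

# Concatenation doubles the channel count but keeps the spatial size,
# which is why these connections preserve fine spatial detail.
merged = np.concatenate([encoder_feat, decoder_feat], axis=1)
print(merged.shape)  # (1, 128, 32, 32)
```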
He: why isn't the weighting of the two paths a learnable parameter?
Jialin: the earlier layers are always frozen when training the new layers.
He: if only the means are used, why calculate the standard deviation?
Tom: the low-resolution layers determine the features.
Date: 7th Feb
1. How to do convolution on tensors of rank 0 to 3.
2. The overall convolution filter for a rank-n tensor is a rank-2n tensor, and why it is decomposed.
3. Some methods to decompose filters for rank-n tensors.
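The motivation for decomposing filters can be illustrated with the simplest case, a separable 2D kernel (a hedged sketch with an assumed binomial kernel, not the paper's method): a rank-1 kernel factors into two 1D filters, so filtering costs O(2k) per pixel instead of O(k²).

```python
import numpy as np

# A separable 2D kernel is the outer product of two 1D kernels.
a = np.array([1., 4., 6., 4., 1.])   # 1D binomial (Gaussian-like) filter
kernel_2d = np.outer(a, a)           # full 5x5 kernel, rank 1

# SVD recovers the 1D factors from the 2D kernel; one nonzero singular
# value means the kernel is exactly separable.
u, s, vt = np.linalg.svd(kernel_2d)
print(int(np.sum(s > 1e-8)))         # 1  (numerical rank)

row_filter = np.sqrt(s[0]) * u[:, 0]
col_filter = np.sqrt(s[0]) * vt[0, :]
print(np.allclose(np.outer(row_filter, col_filter), kernel_2d))  # True
```

Higher-rank decompositions generalise this idea to filters acting on rank-n tensors.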
Date: 15th Feb
After the presentation please try to give your opinion on the paper!
Contents:
TempoGAN upsamples the low-resolution simulation result to a high-resolution result. Compared with previous CNNs, this module takes three consecutive frames of the time series as input.
Why is there a dimension inconsistency?
Some regions of the TempoGAN result are worse than the baseline; it is hard to say which result is better.
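The three-frame input mentioned above can be sketched as channel stacking (a minimal illustration with assumed shapes, not TempoGAN's actual code):

```python
import numpy as np

# Assumed shapes for illustration: three consecutive low-resolution
# frames (t-1, t, t+1) stacked along the channel axis, so the network
# sees local temporal context.
frames = [np.random.rand(1, 1, 64, 64) for _ in range(3)]
net_input = np.concatenate(frames, axis=1)
print(net_input.shape)  # (1, 3, 64, 64)
```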
It is better to present one paper in more detail than two papers.
The words are embedded via integer IDs.
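The "ID format" presumably means each word is mapped to an integer index that selects a row of an embedding matrix; a minimal sketch with an assumed toy vocabulary (not the paper's):

```python
import numpy as np

# Hypothetical vocabulary and embedding size, for illustration only.
vocab = {"hello": 0, "world": 1, "<unk>": 2}
embedding_matrix = np.random.rand(len(vocab), 8)   # 8-dim embeddings

def embed(words):
    # Map each word to its integer ID, then look up the matrix row.
    ids = [vocab.get(w, vocab["<unk>"]) for w in words]
    return embedding_matrix[ids]

print(embed(["hello", "world", "foo"]).shape)      # (3, 8)
```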
What is the training time?
The encoder and classifier are trained separately.
How to separate the speech information from the word content?
Is there evidence that the disentanglement really works?
In the video there are only minor changes apart from the mouth and the blinking eyes.
This is CycleGAN, the network used for creating the dataset.
Constraining the GAN makes it difficult to train.
It is possible to build two datasets, with and without makeup.
The purposes of the attack: to avoid recognition, and to improve the defense system.
The difference between knit and woven fabrics, and the reason for yarn-level simulation.
What are the Lagrangian and Eulerian DoFs?
The contact plane is used to compute contact force.
What is the visco-elastic property?
After the presentation please try to give your opinion on the paper!
Paper: GAN Compression: Efficient Architectures for Interactive Conditional GANs.
Traditional GAN, conditional GAN, and Pix2Pix GAN.
Previous methods to reduce computation.
Validation to choose a proper student network.
Decouple training and search.
Background on flow control and active control; it is hard to design active control.
Reinforcement learning algorithms. The output of the ANN is a distribution over control strategies. The optimization loss is the long-term performance. The number of sampled strategies increases over time, so the training cost seems to keep growing.
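The policy-gradient idea behind this can be sketched on a toy problem (not the paper's setup): the policy is a distribution over actions, the objective is the expected long-term return, and its gradient with respect to the logits is pi_k * (r_k - E[r]). In practice this gradient is estimated from sampled rollouts, which is why the sample count and training cost grow.

```python
import numpy as np

logits = np.zeros(3)                      # preferences over 3 control actions
returns = np.array([0.1, 0.9, 0.3])       # assumed long-term return per action

for _ in range(500):
    probs = np.exp(logits) / np.exp(logits).sum()   # softmax policy
    # Exact policy gradient: d E[r] / d logit_k = pi_k * (r_k - E[r]).
    grad = probs * (returns - probs @ returns)
    logits += 1.0 * grad                            # gradient ascent

probs = np.exp(logits) / np.exp(logits).sum()
print(probs.argmax())  # 1: mass concentrates on the highest-return action
```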
Presenter: Dody
Can we use ML to accelerate hyperelastic simulation?
Traditional reduced-order methods use PCA.
How to integrate the autoencoder.
Why not use the autoencoder alone rather than combining it with PCA?
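The PCA half of the reduced-order idea can be sketched with synthetic data (not the paper's system): snapshots of a high-DoF simulation that really live in a low-dimensional subspace are projected onto the top PCA modes; an autoencoder can then model any remaining nonlinear structure inside that subspace.

```python
import numpy as np

# Synthetic data: 200 snapshots of a 100-DoF system constructed to lie
# in a 3-dimensional subspace.
rng = np.random.default_rng(0)
states = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 100))

mean = states.mean(axis=0)
u, s, vt = np.linalg.svd(states - mean, full_matrices=False)
basis = vt[:3]                                   # top 3 PCA modes

reduced = (states - mean) @ basis.T              # 100 DoFs -> 3 coordinates
reconstructed = reduced @ basis + mean
print(np.allclose(reconstructed, states))        # True: 3 modes suffice here
```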
What does J mean?
Concepts of discretization, implicit Euler, collision detection and response.
Adjusting the simulation parameters usually requires manual work; this work aims to adjust the parameters automatically.
Dynamic and continuous collision detection, QR decomposition.
Derivatives of the physics solve. Constraints, the KKT conditions.
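The QR decomposition mentioned above can be illustrated on a random matrix (unrelated to the paper's actual system): A = QR with Q orthonormal and R upper triangular, a standard tool for solving least-squares systems stably inside a solver.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))
Q, R = np.linalg.qr(A)

print(np.allclose(Q @ R, A))             # True: exact factorisation
print(np.allclose(Q.T @ Q, np.eye(3)))   # True: orthonormal columns
print(np.allclose(R, np.triu(R)))        # True: upper triangular
```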
After the presentation please try to give your opinion on the paper!
Traditional image blending.
Combine MVS and IBR.
Deep Blending, 4 challenges in old methods.
Two offline pipelines, one online.
3D geometry reconstruction.
Two novel training losses.
After the presentation please try to give your opinion on the paper!
MeshingNet by Zheyan
The question asked in the presentation:
The comments on every slide:
2nd slide: make the figures less dense / a bit simpler.
3rd slide: What data are you actually training on? Use a real mesh instead of a simplified mesh.
4th slide: what does 3000 mean?
6th slide: It needs motivation: how should it affect the mesh? Need examples.
7th slide: what’s the baseline?
10th slide: make it look 3D.
11th slide: Why does the result by CZ have colour?
Paper link: Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach
After the presentation please try to give your opinion on the paper!
Presenter: Dody Dharma
LAGRANGIAN FLUID SIMULATION WITH CONTINUOUS CONVOLUTIONS
Learning to Measure the Static Friction Coefficient in Cloth Contact
This research proposes a vision-based deep neural network for estimating the cloth/substrate dry friction coefficient. It uses synthetic datasets to train the neural network model, which is nevertheless capable of estimating the dry friction coefficient of real cloth. This is realised by accurately recovering the cloth dynamics related to the dry friction coefficient. Moreover, this research introduces a Conditional Friction Model which is optimised for extracting the information related to the dry friction coefficient.
Jialin presents: PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models
The background is obtaining high-definition images from low-resolution input. PULSE ensures the quality of the super-resolution image, whereas traditional methods lead to blurring because they only compare the HR and LR images. A traditional MSE loss neglects details such as texture, so they are smoothed out. To address this, people increase the distance between SR and HR images or use a perceptual loss.
This paper proposes a new paradigm and uses novel methods, including a search in the Gaussian latent domain.
PULSE may illuminate some biases inherent in StyleGAN.
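The latent-space exploration at the heart of PULSE can be sketched with a toy linear "generator" (not StyleGAN, and far simpler than the paper's constrained search): find a latent code z whose generated high-resolution image, once downsampled, matches the observed low-resolution image.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((16, 4))                  # toy generator: 4-dim z -> 16-pixel HR image
downsample = np.kron(np.eye(4), np.ones(4) / 4)   # 16 -> 4 average pooling
lr_image = downsample @ (G @ rng.standard_normal(4))  # observed LR image

z = np.zeros(4)
loss0 = 0.5 * np.sum((downsample @ (G @ z) - lr_image) ** 2)
for _ in range(1000):
    residual = downsample @ (G @ z) - lr_image
    z -= 0.02 * (G.T @ (downsample.T @ residual))  # gradient step in latent space
loss1 = 0.5 * np.sum((downsample @ (G @ z) - lr_image) ** 2)
print(loss1 < loss0)  # True: the latent search reduces the LR mismatch
```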
CVPR video/paper for discussion: Unsupervised Learning of Probably Symmetric Deformable 3D Objects from Images in the Wild
Zheyan presents: Dynamic Fluid Surface Reconstruction Using Deep Neural Network
CVPR video for discussion:
Dody will present: BSP-Net: Generating Compact Meshes via Binary Space Partitioning
CVPR video/paper for discussion: Learning Long-term Visual Dynamics, Xiaolong Wang.
Deshan presents: Use the Force, Luke! Learning to Predict Physical Forces by Simulating Effects
Watch this video: VIBE: Video Inference for Human Body Pose and Shape Estimation (CVPR 2020)
Feixiang presents: Skeleton-Aware Networks for Deep Motion Retargeting
Watch this video: Self-supervised Learning of Interpretable Keypoints from Unlabelled Videos - CVPR 2020 oral
Jialin presents: Contrastive Learning for Unpaired Image-to-Image Translation
Zheyan presents: Neural Cages for Detail-Preserving 3D Deformations
Deshan presents: RAFT: Recurrent All-Pairs Field Transforms for Optical Flow
Dody talked about graph CNNs and how this technique may help our research. Graph Neural Networks in Particle Physics
Zheyan Zhang presents: Attention Is All You Need
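The core of "Attention Is All You Need" is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T/sqrt(d))V; a minimal sketch with assumed toy dimensions:

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention: each query attends to all keys, and
    # the softmax weights mix the corresponding value vectors.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((5, 8)) for _ in range(3))
out = attention(Q, K, V)
print(out.shape)  # (5, 8): one output vector per query
```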
Ricardo Luna Gutierrez presents: Information-theoretic Task Selection for Meta-Reinforcement Learning
Recent works on human motion classification attack
Markus presents: Neural Temporal Adaptive Sampling and Denoising
Jialin presents: Semantic Photo Manipulation with a Generative Image Prior
An introduction to the field of my research, with an overview of the problem and the current state of research surrounding it.
He Wang presents: Efficient Transformers: A Survey
Tom presents: Generative Layout Modeling using Constraint Graphs
Baiyi presents: Editing in Style: Uncovering the Local Semantics of GANs
Zheyan presents: Machine learning–accelerated computational fluid dynamics
Finite elements with machine learning
Maria presents: DeepM&Mnet: Inferring the electroconvection multiphysics fields based on operator approximation by neural networks
Jiangbei presents: Masked Autoencoders Are Scalable Vision Learners
Maria presents: Understanding and mitigating gradient pathologies in physics-informed neural networks
Shaun presents: Spectral images based environmental sound classification using CNN with meaningful data augmentation
Mou Li presents: Analyzing Inverse Problems with Invertible Neural Networks
Feixiang presents: Prototypical contrast and reverse prediction: Unsupervised skeleton based action recognition
Feixiang presents: Spatio-Temporal Gating-Adjacency GCN for Human Motion Prediction