Sneaky Machine Learning Reading Group

He Wang, Tom Kelly, and their PGR students meet every Friday afternoon for the Machine Learning in 3D reading group. In each meeting, one person presents a cutting-edge machine learning paper from any field. It is an opportunity to share knowledge and practice academic communication skills.

This page contains the content of past meetings (links to slides and meeting notes) and information about upcoming meetings.

Time and Place

Online: Fridays at 15:30.

Upcoming meeting

2020 Meetings

31st Jan: Progressive Growing of GANs for Improved Quality, Stability, and Variation

Progressive Growing of GANs for Improved Quality, Stability, and Variation

Presenter: Jialin

Date: 31st Jan

Minutes:

He: the original GAN can hardly learn a mixture of two distributions well.

Jialin: U-Net concatenates features from the left (encoder) side to the right (decoder) side at the same layer.

Tom: these connections maintain spatial resolution.

He: why is the weight blending the two paths not a learnable parameter?

Jialin: the earlier layers are always fixed while the new layers are trained.

He: if the mean is used, why also calculate the standard deviation?

Tom: Low resolution determines features.
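
For context on the fade-in weight discussed above, here is a minimal sketch (not the paper's exact architecture; the channel counts and toRGB layers are illustrative assumptions) of how a newly added higher-resolution block is blended with the old low-resolution output using a scheduled, non-learnable alpha:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FadeInBlock(nn.Module):
    """Blend the output of a newly added high-res block with the upsampled
    old low-res output: out = alpha * new + (1 - alpha) * old.
    alpha is a value ramped from 0 to 1 on a schedule, not a learnable parameter."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.new_block = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.LeakyReLU(0.2),
        )
        self.old_to_rgb = nn.Conv2d(in_ch, 3, 1)   # toRGB of the old resolution
        self.new_to_rgb = nn.Conv2d(out_ch, 3, 1)  # toRGB of the new resolution

    def forward(self, x, alpha):
        old = F.interpolate(self.old_to_rgb(x), scale_factor=2, mode="nearest")
        new = self.new_to_rgb(self.new_block(x))
        return alpha * new + (1.0 - alpha) * old

block = FadeInBlock(64, 32)
img = block(torch.randn(4, 64, 8, 8), alpha=0.3)  # alpha grows towards 1 during training
print(img.shape)  # torch.Size([4, 3, 16, 16])
```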

 

 

7th Feb: Efficient N-Dimensional Convolutions via Higher-Order Factorization

Efficient N-Dimensional Convolutions via Higher-Order Factorization

Presenter: Zheyan

Date: 7th Feb

Contents:

1. How to do convolution for tensors of dimension 0 to 3.

2. Why the overall convolution filter for a rank-n tensor is a rank-2n tensor, and why it should be decomposed.

3. Some methods to decompose filters for n-dimensional tensors.
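
As a rough illustration of point 3 (not the paper's exact formulation), a large convolution kernel can be replaced by a chain of small convolutions acting on one mode at a time, in the spirit of a CP factorisation; the rank, kernel size and channel counts below are made-up assumptions:

```python
import torch
import torch.nn as nn

class CPFactorizedConv2d(nn.Module):
    """Approximate a full C_out x C_in x k x k kernel with a CP-style chain:
    a 1x1 conv (mix input channels down to rank r), a k x 1 and a 1 x k
    depthwise conv (the two spatial modes), then a 1x1 conv up to C_out."""
    def __init__(self, c_in, c_out, k, rank):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c_in, rank, kernel_size=1, bias=False),
            nn.Conv2d(rank, rank, kernel_size=(k, 1), padding=(k // 2, 0),
                      groups=rank, bias=False),
            nn.Conv2d(rank, rank, kernel_size=(1, k), padding=(0, k // 2),
                      groups=rank, bias=False),
            nn.Conv2d(rank, c_out, kernel_size=1, bias=False),
        )

    def forward(self, x):
        return self.body(x)

full = nn.Conv2d(64, 128, kernel_size=5, padding=2, bias=False)
factored = CPFactorizedConv2d(64, 128, k=5, rank=16)
n_params = lambda m: sum(p.numel() for p in m.parameters())
print(n_params(full), n_params(factored))  # the factorised version is far smaller
x = torch.randn(2, 64, 32, 32)
print(full(x).shape, factored(x).shape)    # both (2, 128, 32, 32)
```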

15th Feb: TempoGAN

tempoGAN

Presenter: Dody

Date: 15th Feb

After the presentation please try to give your opinion on the paper!

Contents:

tempoGAN maps a low-resolution simulation result to a high-resolution result. Compared with previous CNN approaches, this model takes 3 consecutive frames of the time series as input.

Discussion:

Why is there a dimension inconsistency?

Some regions of the tempoGAN result are worse than the baseline; it is hard to say which result is better.

It is better to present one paper in more detail than two papers.

21st Feb: Talking Face Generation by Adversarially Disentangled Audio-Visual Representation

Talking Face Generation by Adversarially Disentangled Audio-Visual Representation

Presenter: Feixiang

Discussion:

The words are embedded into an ID format.

What is the training time?

The encoder and classifier are trained individually.

How is the speech information separated from the remaining information?

Is there evidence that shows the disentanglement really works?

In the video there are only minor changes other than the mouth and the blinking eyes.

28th Feb: Generating Adversarial Examples By Makeup Attacks on Face Recognition

Generating Adversarial Examples by Makeup Attacks on Face Recognition

Presenter: Baiyi

Discussion:

This is a CycleGAN; the networks are used for creating the dataset.

Constraining the GAN makes training difficult.

It would be possible to build two datasets, with and without makeup.

The purposes of the attack: avoiding recognition and improving the defense system.

6th Mar: Yarn-Level Simulation of Woven Cloth

Yarn-Level Simulation of Woven Cloth

Presenter: Deshan

Main content:

The difference between knitted and woven cloth, and the reason for doing yarn-level simulation.

What the Lagrangian and Eulerian DoFs are.

The contact plane is used to compute the contact force.

What the visco-elastic property is.

13th March: Adversarial Examples Are Not Bugs, They Are Features

Adversarial Examples Are Not Bugs, They Are Features

After the presentation please try to give your opinion on the paper!

Presenter: Yunfeng

27th March: Proximal Policy Optimization

3rd April: GAN Compression: Efficient Architectures for Interactive Conditional GANs

Paper: GAN Compression: Efficient Architectures for Interactive Conditional GANs.

Presenter: Jialin

Contents:

Traditional GAN, conditional GAN and Pix2Pix GAN.

Previous methods to reduce computation.

Validation to choose a proper student network.

Decouple training and search.

 

10th April: Artificial neural networks trained through deep reinforcement learning discover control strategies for active flow control

Artificial neural networks trained through deep reinforcement learning discover control strategies for active flow control

Presenter: Zheyan

Contents:

Background on flow control and active control; it is hard to design active control.

Experiment setting.

Reinforcement learning algorithms. The output of the ANN is a distribution over control strategies. The optimization objective is the long-term performance. The number of sampled strategies increases over time, so the training cost seems to keep growing.
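
To make "the output of the ANN is a distribution over control strategies" concrete, here is a minimal sketch of a Gaussian policy head as used in standard policy-gradient methods; the probe/jet dimensions and the PPO-style usage are illustrative assumptions, not the paper's exact setup:

```python
import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    """Maps a flow observation (e.g. probe pressures) to a Gaussian
    distribution over control actions (e.g. jet mass-flow rates)."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.mean = nn.Linear(hidden, act_dim)
        self.log_std = nn.Parameter(torch.zeros(act_dim))  # state-independent std

    def forward(self, obs):
        h = self.net(obs)
        return torch.distributions.Normal(self.mean(h), self.log_std.exp())

policy = GaussianPolicy(obs_dim=151, act_dim=2)   # e.g. 151 probes, 2 jets (illustrative)
obs = torch.randn(1, 151)
dist = policy(obs)
action = dist.sample()                            # control applied to the simulation
log_prob = dist.log_prob(action).sum(-1)          # used by the policy-gradient objective
print(action.shape, log_prob.shape)
```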

24th April: Latent-space Dynamics for Reduced Deformable Simulation

Latent-space Dynamics for Reduced Deformable Simulation

Presenter: Dody

Contents:

Can we use ML to accelerate hyperelastic simulation?

Traditional reduced-order methods use PCA.

How to integrate an autoencoder.

Questions:

Why not use the autoencoder alone rather than combining it with PCA?

What does J mean?
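
Regarding the question about J: in reduced simulation it typically denotes the Jacobian of the decoder (the map from latent coordinates to full-space positions), which appears in terms such as the reduced mass matrix. A minimal sketch of computing it with autograd, assuming a toy decoder rather than the paper's network:

```python
import torch

# Toy decoder: latent coordinates z (dim 5) -> full vertex positions x (dim 300).
decoder = torch.nn.Sequential(
    torch.nn.Linear(5, 64), torch.nn.ELU(), torch.nn.Linear(64, 300)
)

z = torch.zeros(5)

# J = dx/dz, an n x r matrix; full-space dynamics can then be pulled back
# to the latent space via terms like J^T M J (a reduced mass matrix).
J = torch.autograd.functional.jacobian(lambda z_: decoder(z_), z)
print(J.shape)  # torch.Size([300, 5])
```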

 

1st May: TransMoMo

TransMoMo

Presenter: Feixiang

8th May: Differentiable Cloth Simulation for Inverse Problems

Differentiable Cloth Simulation for Inverse Problems

Presenter: Deshan

Contents:

Concepts of discretization, implicit Euler, collision detection and response.

It usually requires manual labour to adjust the simulation parameters; this work aims to adjust the parameters automatically.

Dynamic and continuous collision detection, QR decomposition.

Derivatives of the physics solve (see the sketch below). Constraints, KKT conditions.

Experiments
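
As a toy illustration of "derivatives of the physics solve" (a generic sketch, not the paper's cloth solver), differentiating through a linear solve can be done either by autograd or analytically via the implicit function theorem:

```python
import torch

# Toy "physics solve": x solves A(theta) x = b, with A = I + theta * L.
n = 4
L_mat = (torch.diag(torch.ones(n)) * 2
         - torch.diag(torch.ones(n - 1), 1)
         - torch.diag(torch.ones(n - 1), -1))
b = torch.ones(n)
theta = torch.tensor(0.5, requires_grad=True)

A = torch.eye(n) + theta * L_mat
x = torch.linalg.solve(A, b)   # forward physics solve
loss = x.sum()
loss.backward()                # autograd differentiates through the solve
print(theta.grad)

# Same gradient from the implicit function theorem, without autograd:
# dx/dtheta = A^{-1} (db/dtheta - dA/dtheta x), with db/dtheta = 0 here.
with torch.no_grad():
    dA = L_mat
    dx = torch.linalg.solve(A, -dA @ x)
    print(dx.sum())            # matches theta.grad for loss = sum(x)
```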

15th May: Deep Blending for Free-Viewpoint Image-Based Rendering

Deep Blending for Free-Viewpoint Image-Based Rendering

 

Presenter: Jialin

Contents:

After the presentation please try to give your opinion on the paper!

Traditional image blending.

Combine MVS and IBR.

Deep Blending, 4 challenges in old methods.

Two offline pipelines, one online.

3D geometry reconstruction.

DNN architecture.

Rendering algorithm.

Two novel training losses.

Limitations.

22nd May: Accelerated design and characterization of non-uniform cellular materials via a machine-learning based framework

Accelerated design and characterization of non-uniform cellular materials via a machine-learning based framework

After the presentation please try to give your opinion on the paper!

Presenter: Zheyan

29th May: MeshingNet

MeshingNet by Zheyan

The questions asked in the presentation:

  1. How does the algorithm of the solution work?
  2. How does the network connect to the mesh?
  3. When does it break the triangle?
  4. Does it resize the triangle?
  5. Is the mesh generator differentiable, and is it inside or outside the neural network?
  6. How did you arrive at this particular neural network design?
  7. Why do you show the training loss rather than the testing loss?
  8. If you don't show the testing loss alongside the training loss, it's hard to tell.

The comments on every slide:

2nd slide: make the figures carry less information / make them a bit simpler.

3rd slide: What data are you actually training on? Use a real mesh instead of a simplified mesh.

4th slide: what does 3000 mean?

6th slide: It needs motivation: how should it affect the mesh? Need examples.

7th slide: what's the baseline?

10th slide: make it look 3D.

11th slide: Why does the result by CZ have colour?

by Yunfeng

5th June: Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach

Paper link:

Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach

After the presentation please try to give your opinion on the paper!

Presenter: Yunfeng

Minutes:

  • Deshan: "Why does Algorithm 1 return the right value of the binary search, instead of something else, like the middle value or the left value?"
    • Yunfeng: I think it should return v_right or v_left (it depends on whether f(x0 + v_mid * theta) == y0). For the binary search, v_left/v_right has already been updated to v_mid, so we don't need to return v_mid.
  • He Wang:  "It is very interesting to see that they can prove an upper bound of the number of iterations needed. It would be good to show the proof if possible."
  • Jialin :  "They do not consider time series data. I agree that we need to consider some particular problems like weighting scheme when we attack time-series data."
    • The last paragraph in 3.2 mentioned that for high-dimensional problems, they sample 20 vectors from a Gaussian distribution and average their estimators to get ĝ. This seems to have no theoretical basis and appears to be based entirely on their experimental results.
  • Feixiang: "We can use a vector to indicate the direction, theta, but how do we decide the length of the vector? Does it have the same dimension as the input image?"
  • Dody: "They said: 'for high-dimensional problems, we found the estimation in equation (7) is very noisy' -> would you explain what they mean?"
    • Yunfeng: they do the estimation of ĝ q times, then average them.
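
To make the two points above concrete (what the binary search returns, and the averaged estimate of ĝ), here is a rough sketch of the idea in the hard-label setting; the `model` placeholder, step sizes and the toy demo are assumptions, not the paper's code:

```python
import numpy as np

def g(model, x0, y0, theta, tol=1e-3, v_max=10.0):
    """Distance to the decision boundary of `model` along direction theta:
    the smallest v with model(x0 + v * theta/||theta||) != y0, found by binary
    search. It returns v_right (an adversarial point), because v_left/v_right
    have already been tightened to v_mid inside the loop."""
    theta = theta / np.linalg.norm(theta)
    v_left, v_right = 0.0, v_max
    if model(x0 + v_right * theta) == y0:
        return np.inf  # no boundary crossing within v_max along this direction
    while v_right - v_left > tol:
        v_mid = 0.5 * (v_left + v_right)
        if model(x0 + v_mid * theta) == y0:
            v_left = v_mid    # still the original label: move the left end up
        else:
            v_right = v_mid   # already adversarial: move the right end down
    return v_right

def grad_estimate(model, x0, y0, theta, q=20, beta=0.05):
    """Zeroth-order estimate of dg/dtheta, averaged over q Gaussian directions
    (the averaging step the discussion above refers to)."""
    g0 = g(model, x0, y0, theta)
    est = np.zeros_like(theta)
    for _ in range(q):
        u = np.random.randn(*theta.shape)
        est += (g(model, x0, y0, theta + beta * u) - g0) / beta * u
    return est / q

# Toy demo: a hard-label "model" on 2-D inputs (placeholder, not the paper's setup).
model = lambda x: int(x.sum() > 1.0)
x0, y0, theta = np.zeros(2), 0, np.ones(2)
print(g(model, x0, y0, theta))              # ~0.707: distance to the line x1 + x2 = 1
print(grad_estimate(model, x0, y0, theta))  # noisy estimate of the gradient of g
```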

12th June: Lagrangian Fluid Simulation with Continuous Convolutions

 

Presenter: Dody Dharma

Paper link:

 

Lagrangian Fluid Simulation with Continuous Convolutions

 

19th June: Unpaired Motion Style Transfer from Video to Animation

Presenter: Feixiang

 

paper

26th June: Learning to Measure the Static Friction Coefficient in Cloth Contact

 

Learning to Measure the Static Friction Coefficient in Cloth Contact

This research proposes a vision-based deep neural network for estimating the cloth/substrate dry friction coefficient. The model is trained on synthetic datasets yet is capable of estimating the dry friction coefficient of real cloth. This is realised by accurately recovering the cloth dynamics related to the dry friction coefficient. Moreover, this research introduces a Conditional Friction Model which is optimised for extracting the information related to the dry friction coefficient.

3rd July: PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models

Jialin presents:

PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models

The background is obtaining HD images from low-resolution ones. PULSE ensures the quality of the super-resolution image, whereas traditional methods lead to blurring because they only compare the HR and LR images. Traditional MSE losses neglect details such as texture, so these are smoothed out. To avoid this, people increase the distance measure between the SR and HR images or use a perceptual loss.

This paper proposes a new paradigm and uses novel methods, including a search in the Gaussian latent domain.

PULSE may illuminate some biases inherent in StyleGAN.
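
A heavily simplified sketch of the latent-space search described above: rather than regressing HR from LR, search a pretrained generator's Gaussian latent space for an image whose downsampled version matches the LR input. The `generator`, shapes and optimiser settings are placeholders (the real method uses StyleGAN and constrains the latents to the typical set of the Gaussian prior):

```python
import torch
import torch.nn.functional as F

def latent_space_sr(generator, lr_img, latent_dim=512, scale=32, steps=500, lr=0.1):
    """Find z such that downscale(G(z)) ~= lr_img, so G(z) is a plausible HR image.
    `generator` maps a latent vector to an HR image tensor (placeholder)."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        hr = generator(z)                           # candidate HR image
        down = F.avg_pool2d(hr, kernel_size=scale)  # project back to LR
        loss = F.mse_loss(down, lr_img)             # match the observed LR image
        # (PULSE additionally keeps z close to the typical set of the Gaussian prior.)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(z).detach()

# Toy stand-in generator so the sketch runs end to end (not StyleGAN).
toy_gen = torch.nn.Sequential(torch.nn.Linear(512, 3 * 64 * 64),
                              torch.nn.Unflatten(1, (3, 64, 64)))
lr_img = torch.rand(1, 3, 2, 2)
sr = latent_space_sr(toy_gen, lr_img, scale=32, steps=10)
print(sr.shape)  # torch.Size([1, 3, 64, 64])
```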

CVPR video/paper for discussion:

Unsupervised Learning of Probably Symmetric Deformable 3D Objects from Images in the Wild

 

10th July: Dynamic Fluid Surface Reconstruction Using Deep Neural Network

Zheyan presents:

Dynamic Fluid Surface Reconstruction Using Deep Neural Network

CVPR video for discussion:

 

17th July: BSP-Net: Generating Compact Meshes via Binary Space Partitioning

24th July: Use the Force, Luke! Learning to Predict Physical Forces by Simulating Effects

7th August: Skeleton-Aware Networks for Deep Motion Retargeting

14th August: Contrastive Learning for Unpaired Image-to-Image Translation

21st August: Neural Cages for Detail-Preserving 3D Deformations

4th September: RAFT: Recurrent All-Pairs Field Transforms for Optical Flow

11th September: Graph Neural Networks in Particle Physics

Dody talked about graph CNNs and how this technique may help our research.

Graph Neural Networks in Particle Physics

 

18th September: Long-term Human Motion Prediction with Scene Context

Feixiang presents:

Long-term Human Motion Prediction with Scene Context

25th September: SRFlow: Learning the Super-Resolution Space with Normalizing Flow

9th October: Attention Is All You Need

Zheyan Zhang presents.

Attention Is All You Need

 

16th Oct: Learning Mesh-Based Simulation with Graph Convolution

23rd Oct: Scalable Graph Networks for Particle Simulations

11th Dec: Solver-in-the-Loop: Learning from Differentiable Physics to Interact with Iterative PDE-Solvers

2021 Meetings

15th January It Is Not the Journey but the Destination: Endpoint Conditioned Trajectory Prediction

22nd January COVID-19 Cough Classification using Machine Learning and Global Smartphone Recordings

29th January Projective Dynamics with Contact

12th February Human Motion Classification Attack

Recent works on human motion classification attack

By He

26th February Lagrangian Neural Networks

19th March GCN Semi-Supervised Classification with Graph Convolutional Networks

26th March Neural Temporal Adaptive Sampling and Denoising

16th April Semantic Photo Manipulation with a Generative Image Prior

23rd April Deep Learning in Audio Source Separation

Shaun presents:

The field of my research, with an overview of the problem and the current state of research surrounding it.

30th April Efficient Transformers: A Survey

14th May Generative Layout Modeling using Constraint Graphs

21st May Editing in Style: Uncovering the Local Semantics of GANs

18th June Neural Operator For Parametric PDEs

Maria presents:

27th Aug Machine learning–accelerated computational fluid dynamics

3rd Sep Unsupervised Image Generation with Infinite Generative Adversarial Networks

10th Sep OptNet: Differentiable Optimization as a Layer in Neural Networks

12th Nov Neural Animation Layering for Synthesizing Martial Arts Movements

19th Nov Finite element with machine learning

Zheyan presents:

Finite element with machine learning

26th Nov DeepM&Mnet: Inferring the electroconvection multiphysics fields based on operator approximation by neural networks

3rd Dec Masked Autoencoders Are Scalable Vision Learners

2022 Meetings

11th Feb ChoreoMaster: Choreography-Oriented Music-Driven Dance Synthesis

18th Feb Understanding and mitigating gradient pathologies in physics-informed neural networks

4th Mar Spectral images based environmental sound classification using CNN with meaningful data augmentation

11th Mar Analyzing Inverse Problems with Invertible Neural Networks

27th May Image Classification using Graph Neural Network

Usman presents:

17th June Prototypical contrast and reverse prediction: Unsupervised skeleton based action recognition

8th July Spatio-Temporal Gating-Adjacency GCN for Human Motion Prediction