3DML reading group

Introduction

He Wang, Tom Kelly, and their PGR students meet every Friday afternoon for the Machine Learning in 3D reading group. In each meeting, one person presents a cutting-edge machine learning paper from any field. It is an opportunity to share knowledge and practice academic communication skills.

This page contains records of past meetings (links to slides and meeting notes) and information about upcoming meetings.

Upcoming meeting

Time and Place

Online: Friday, 15th May @ 15:30

Paper link:

Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach

After the presentation please try to give your opinion on the paper!

Presenter: Yunfeng

Presenter order: Dody -> Baiyi -> Feixiang -> Deshan -> Jialin -> Zheyan

Past Meetings

31st Jan: Progressive Growing of GANs for Improved Quality, Stability, and Variation

Progressive Growing of GANs for Improved Quality, Stability, and Variation

Presenter: Jialin

Date: 31st Jan

Minutes:

He: the original GAN can hardly learn a mixture of two distributions well.

Jialin: U-Net concatenates features from the encoder (left) to the decoder (right) at the same resolution level.

Tom: these connections maintain spatial resolution.

He: why is the weight between the two pathways not a learnable parameter?

Jialin: the earlier layers are always fixed while training the new layers.

He: given that the mean is used, why also calculate the standard deviation?

Tom: Low resolution determines features.
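The fade-in discussed above (He's question about the weight between the two pathways) can be sketched as follows. This is a minimal illustration of the progressive-growing blend, not the group's or the paper's code; the linear alpha schedule is an assumption:

```python
import numpy as np

def fade_in(low_res_up, high_res, alpha):
    """Blend the upsampled low-res pathway with the new high-res layer.

    In progressive growing, alpha ramps from 0 to 1 on a fixed schedule
    while the new layer is phased in; it is not a learned parameter.
    """
    return (1.0 - alpha) * low_res_up + alpha * high_res

# Toy 4x4 feature maps standing in for generator outputs.
low = np.zeros((4, 4))
high = np.ones((4, 4))
blended = fade_in(low, high, 0.25)  # 25% of the way to the new pathway
```

Because alpha follows a schedule rather than being trained, the new layer is introduced gradually without destabilizing the already-trained lower resolutions.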


7th Feb: Efficient N-Dimensional Convolutions via Higher-Order Factorization

Efficient N-Dimensional Convolutions via Higher-Order Factorization

Presenter: Zheyan

Date: 7th Feb

Contents:

1. How to do convolution for tensors of dimension 0-3.

2. Why the overall convolution filter for an order-n tensor is an order-2n tensor, and why it should be decomposed.

3. Methods to decompose filters for n-dimensional tensors.
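As an illustration of item 3, a rank-1 (CP-style) factorization of a 2D filter lets the convolution be applied as two cheaper 1D passes. This is a minimal NumPy sketch of the general idea, not the paper's implementation:

```python
import numpy as np

def conv2d_valid(img, k):
    """Naive 'valid'-mode 2D cross-correlation."""
    H, W = img.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
u = rng.standard_normal(3)      # vertical factor
v = rng.standard_normal(3)      # horizontal factor
full = np.outer(u, v)           # rank-1 (separable) 3x3 filter

dense = conv2d_valid(img, full)            # one dense 2D pass
step1 = conv2d_valid(img, u[:, None])      # 1D pass along rows
sep = conv2d_valid(step1, v[None, :])      # 1D pass along columns
```

The two 1D passes use 3 + 3 multiplies per output instead of 9 for the dense filter; for higher-order tensors the savings grow accordingly.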

15th Feb: TempoGAN

tempoGAN

Presenter: Dody

Date: 15th Feb

Contents:

TempoGAN upsamples a low-resolution simulation result to a high-resolution one. Compared with previous CNN approaches, this model takes three consecutive frames as input.

Discussion:

Why is there a dimension inconsistency?

Some regions of the TempoGAN result are worse than the baseline; it is hard to say which result is better.

It is better to present one paper in more detail than two papers.

21st Feb: Talking Face Generation by Adversarially Disentangled Audio-Visual Representation

Talking Face Generation by Adversarially Disentangled Audio-Visual Representation

Presenter: Feixiang

Discussion:

The words are embedded as IDs.

What is the training time?

The encoder and classifier are trained individually.

How is the speech information separated from the remaining information?

Is there evidence that shows the disentanglement really works?

In the video there is little change other than the mouth and blinking eyes.

28th Feb: Generating Adversarial Examples by Makeup Attacks on Face Recognition

Generating Adversarial Examples by Makeup Attacks on Face Recognition

Presenter: Baiyi

Discussion:

This is a CycleGAN; the networks are used for creating the dataset.

Constraining the GAN makes it difficult to train.

Possibly build two datasets, with and without makeup.

The purpose of the attack: evade recognition, and help update the defense system.

6th Mar: Yarn-Level Simulation of Woven Cloth

Yarn-Level Simulation of Woven Cloth

Presenter: Deshan

Main content:

The difference between knitted and woven cloth, and the motivation for yarn-level simulation.

What the Lagrangian and Eulerian DoFs are.

The contact plane is used to compute contact force.

What the visco-elastic property is.

13th March: Adversarial Examples Are Not Bugs, They Are Features

Adversarial Examples Are Not Bugs, They Are Features

After the presentation please try to give your opinion on the paper!

Presenter: Yunfeng

27th March: Proximal Policy Optimization

3rd April: GAN Compression: Efficient Architectures for Interactive Conditional GANs

Paper: GAN Compression: Efficient Architectures for Interactive Conditional GANs.

Presenter: Jialin

Contents:

Traditional GANs, conditional GANs, and Pix2Pix.

Previous methods to reduce computation.

Validation to choose a proper student network.

Decouple training and search.


10th April: Artificial neural networks trained through deep reinforcement learning discover control strategies for active flow control

Artificial neural networks trained through deep reinforcement learning discover control strategies for active flow control

Presenter: Zheyan

Contents:

Background on flow control and active control; active control is hard to design by hand.

Experiment setting.

Reinforcement learning algorithms. The ANN outputs a distribution over control actions, and the optimization objective is the long-term performance. The number of sampled strategies increases over time, so the training cost appears to keep growing.
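A minimal sketch of the policy-gradient idea in the notes (the network outputs a distribution over control actions; the objective is long-term reward), shown on a toy three-action problem. The reward values and the single-layer "network" are made up for illustration and have nothing to do with the paper's flow setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Toy bandit-style "control" problem: 3 discrete actuation settings
# with hidden mean rewards that the controller must discover.
true_reward = np.array([0.1, 0.8, 0.3])
logits = np.zeros(3)         # a one-layer stand-in for the ANN
lr = 0.5

for step in range(2000):
    p = softmax(logits)
    a = rng.choice(3, p=p)                         # sample from the policy
    r = true_reward[a] + 0.05 * rng.standard_normal()
    # REINFORCE: increase the log-probability of the sampled
    # action in proportion to the reward it obtained.
    grad = -p
    grad[a] += 1.0
    logits += lr * r * grad

p = softmax(logits)          # final policy, concentrated on action 1
```

Each update is a sampled estimate of the gradient of expected reward, which is why sample counts (and training cost) grow with the desired accuracy.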

24th April: Latent-space Dynamics for Reduced Deformable Simulation

Latent-space Dynamics for Reduced Deformable Simulation

Presenter: Dody

Contents:

Can we use ML to accelerate hyperelastic simulation?

Traditional reduced-order methods use PCA.

How to integrate an autoencoder.
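A minimal sketch of the traditional PCA/POD reduced-order step mentioned above, using synthetic snapshot data (the mesh sizes and rank are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Snapshots of a deformable object: 50 frames of 30 displacement DoFs,
# constructed to lie exactly in a 3-dimensional subspace.
basis = rng.standard_normal((30, 3))
coeffs = rng.standard_normal((3, 50))
snapshots = basis @ coeffs

# PCA/POD: a truncated SVD of the snapshot matrix gives the reduced basis.
U, S, Vt = np.linalg.svd(snapshots, full_matrices=False)
phi = U[:, :3]               # reduced basis, 30 -> 3 DoFs
q = phi.T @ snapshots        # latent coordinates per frame
recon = phi @ q              # lift back to the full space
```

Dynamics are then integrated in the 3-DoF latent space and lifted back; the paper's question is how an autoencoder can replace or augment this linear subspace.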

Questions:

Why not use the autoencoder alone, rather than combined with PCA?

What does J mean?


1st May: TransMoMo

TransMoMo

Presenter: Feixiang

8th May: Differentiable Cloth Simulation for Inverse Problems

Differentiable Cloth Simulation for Inverse Problems

Presenter: Deshan

Contents:

Concepts of discretization, implicit Euler, collision detection and response.

Adjusting the simulation parameters usually requires manual labor; this work aims to adjust them automatically.

Dynamic and continuous collision detection; QR decomposition.

Derivatives of the physics solve. Constraints; KKT conditions.

Experiments
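As a reminder of the implicit Euler concept in the contents, here is one implicit-Euler step for a single-DoF spring, where the 1-DoF case reduces the usual linear system to a scalar solve. This is a toy illustration of the integrator, not the paper's cloth solver:

```python
# One implicit-Euler step for a 1-DoF undamped spring:  m x'' = -k x.
# Implicit Euler evaluates the force at the *next* state:
#   v1 = v + h * (-k/m) * x1
#   x1 = x + h * v1
# Substituting gives a scalar linear solve for x1:
#   (1 + h^2 k/m) * x1 = x + h * v
m, k, h = 1.0, 100.0, 0.01     # mass, stiffness, time step (made up)
x, v = 1.0, 0.0                # stretched spring, initially at rest

x1 = (x + h * v) / (1.0 + h * h * k / m)
v1 = v + h * (-k / m) * x1
```

Because the step is a (differentiable) linear solve, its derivatives with respect to parameters such as k can be propagated, which is the core idea behind differentiating the physics solve.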

15th May: Deep Blending for Free-Viewpoint Image-Based Rendering

Deep Blending for Free-Viewpoint Image-Based Rendering


Presenter: Jialin

Contents:

Traditional image blending.

Combine MVS and IBR.

Deep Blending; four challenges in old methods.

Two offline pipelines, one online.

3D geometry reconstruction.

DNN architecture.

Rendering algorithm.

Two novel training losses.

Limitations.

22nd May: Accelerated design and characterization of non-uniform cellular materials via a machine-learning based framework

Accelerated design and characterization of non-uniform cellular materials via a machine-learning based framework

After the presentation please try to give your opinion on the paper!

Presenter: Zheyan

29th May: MeshingNet

MeshingNet

The questions asked in the presentation:

  1. What is the algorithm of the solution?
  2. How does the network connect to the mesh?
  3. When does it break a triangle?
  4. Does it resize the triangles?
  5. Is the magic generator differentiable, and is it inside or outside the neural network?
  6. How did you arrive at this particular neural network design?
  7. Why do you show the training loss rather than the testing loss?
  8. If you don't show the testing loss alongside the training loss, it is hard to tell.

The comments on every slide:

2nd slide: make the figures carry less information / make them a bit simpler.

3rd slide: what data are you actually training on? Use a real mesh instead of a simplified mesh.

4th slide: what does 3000 mean?

6th slide: it needs motivation: how should it affect the mesh? Examples are needed.

7th slide: what is the baseline?

10th slide: make it look 3D.

11th slide: why does the result by CZ have colour?

Notes by Yunfeng