
Registration for the INTEGRATE hack day is open

The event will take place at the Pufendorf Institute on Wednesday 5 October.

– Published 23 September 2016

The four topics to be discussed at the Hack Day:

  • Time adaptive Runge-Kutta methods -- Philipp Birken
  • Motion Capture camera placement optimization -- Jens Nirme
  • Parallelization of a neural network code -- Mattias Ohlsson
  • Plotting points on a single image -- Alexey Bobrick

All four will be presented at the beginning of the day (see programme below). People will then split up into groups (part choice, part random, part seasonal shuffle) and discuss/work on the topics over the day. We will all meet up again mid-afternoon, when the groups will tell us what they got up to, the ideas they had, and how it went in general.

We have the main lecture hall but also other rooms in the Pufendorf booked, so there will be plenty of space to spread out.

Bring your laptops. Note that the Pufendorf has eduroam as its wifi network.

The day will be fueled by coffee, tea, and a sandwich lunch.

This will be a day for those who want to work together to code and solve problems.

IMPORTANT POINT: PhD students and postdocs in your research group may be particularly interested in this event, so please spread the word.

Those wishing to attend what promises to be a fun and interesting day should REGISTER by sending an email with the subject "Registration for 5 October INTEGRATE" to Melvyn B. Davies (mbd@astro.lu.se).

Note that as places are limited, you are encouraged to register early to avoid disappointment. In any case, please register by 12.00 on Friday 30 September. The numbers are needed for the food and drink.

HACK DAY -- Wednesday 5 October

Location: Pufendorf Institute, Lund University

09.30-10.00 Tea/coffee

10.00-10.05 Melvyn B. Davies -- Welcome and introduction to the INTEGRATE Hack Day

Introduction to the topics:

10.05-10.10 Time adaptive Runge-Kutta methods -- Philipp Birken

10.10-10.15 Motion Capture camera placement optimization -- Jens Nirme

10.15-10.20 Parallelization of a neural network code -- Mattias Ohlsson

10.20-10.25 Plotting points on a single image -- Alexey Bobrick

10.25-12.30 Group discussions/work

12.30-13.00 Sandwich lunch at the Pufendorf

13.00-14.10 Further group discussions; groups prepare their presentations

Progress reports:

14.10-14.30 Time adaptive Runge-Kutta methods -- group member(s)

14.30-14.50 Motion Capture camera placement optimization -- group member(s)

14.50-15.10 Parallelization of a neural network code -- group member(s)

15.10-15.30 Plotting points on a single image -- group member(s)

15 minutes max for the group presentations + at least 5 minutes for discussion

15.30-16.00 Tea/coffee

HACK-DAY TOPICS

Topic A) Time adaptive Runge-Kutta methods -- Philipp Birken (Mathematics)

The TimE-adaptive iMPlicit sOlver system TEMPO is a C++ library for the numerical solution of initial value problems using time adaptive implicit and explicit Runge-Kutta methods; the nonlinear systems that arise are solved with Jacobian-free Newton-Krylov methods. In the development process we use a semi-professional workflow with a ticket system (Redmine), a version control system (Subversion) and manual regression tests. This will be explained, and then you can try out some tickets yourself.
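
To give a feel for what time adaptivity means in practice, here is a minimal C++ sketch (not TEMPO's interface; all names here are illustrative): an embedded Runge-Kutta pair provides a local error estimate, and the step size is grown or shrunk to keep that estimate near a tolerance.

// Minimal sketch of time-adaptive explicit Runge-Kutta integration using an
// embedded Heun/Euler pair (RK2(1)).  This is NOT the TEMPO API -- it only
// illustrates the step-size control idea behind time adaptivity.
#include <cmath>
#include <cstdio>
#include <functional>

// Integrate y' = f(t, y) from t0 to t1, adapting the step size h so that the
// local error estimate stays near the tolerance tol.
double integrate_adaptive(const std::function<double(double, double)>& f,
                          double y, double t0, double t1, double tol) {
    double t = t0;
    double h = (t1 - t0) / 100.0;               // initial guess for the step size
    while (t < t1) {
        if (t + h > t1) h = t1 - t;             // do not step past the end point
        double k1 = f(t, y);
        double k2 = f(t + h, y + h * k1);
        double y_heun  = y + 0.5 * h * (k1 + k2);  // 2nd-order solution
        double y_euler = y + h * k1;               // embedded 1st-order solution
        double err = std::fabs(y_heun - y_euler);  // local error estimate
        if (err <= tol || h < 1e-12) {             // accept the step
            t += h;
            y = y_heun;
        }
        // Standard controller: grow or shrink h based on the error estimate.
        h *= std::fmin(2.0, std::fmax(0.2, 0.9 * std::sqrt(tol / (err + 1e-30))));
    }
    return y;
}

int main() {
    // Example: y' = -y, y(0) = 1, exact solution exp(-t).
    double y1 = integrate_adaptive([](double, double y) { return -y; },
                                   1.0, 0.0, 1.0, 1e-6);
    std::printf("y(1) ~ %.8f (exact %.8f)\n", y1, std::exp(-1.0));
    return 0;
}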

Topic B) Motion Capture camera placement optimization -- Jens Nirme (The Humanities Laboratory)

The MOCAP system in the Humanities Lab can be used to record all kinds of movement. The default camera placement (8 IR cameras evenly distributed around the walls) works well for recording subjects moving around in the room. However, in other cases we are interested in capturing specific movements with higher fidelity, or in ensuring data capture despite occlusion from some cameras. We have had to customize the camera placement for recording co-speech gestures, for example, where self-occlusion is a problem. This process is time-consuming, and what is optimal can change with only slightly different constraints (such as a chair with armrests). We want to be able to predict the strengths and weaknesses of specific set-ups before re-arranging the MOCAP studio. A search algorithm that finds optimal camera positions and orientations in a simulated 3D environment would be very useful. Samples of a digitally animated 3D model with IR markers can be provided. It should not be necessary to model the MOCAP software's calculations of 3D trajectories. To keep things simple, the fitness function used in the search can be defined as the number of markers visible to at least two cameras.
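
As a concrete starting point, a minimal C++ sketch of such a fitness function might look as follows. The view-cone camera model, the parameter values and all names are illustrative assumptions, and occlusion is deliberately ignored; a real version would add it.

// Minimal sketch of the fitness function suggested above: the number of
// markers seen by at least two cameras.  Cameras are modelled as simple
// view cones (position, viewing direction, half-angle); occlusion by the
// subject's body is ignored here and would be the interesting part to add.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static double norm(Vec3 a) { return std::sqrt(dot(a, a)); }

struct Camera {
    Vec3 pos;          // camera position in the room
    Vec3 dir;          // unit viewing direction
    double half_angle; // half opening angle of the view cone, in radians
};

// A marker is "visible" to a camera if it lies inside the camera's view cone.
bool sees(const Camera& c, Vec3 marker) {
    Vec3 to_marker = sub(marker, c.pos);
    double d = norm(to_marker);
    if (d == 0.0) return false;
    double cos_angle = dot(to_marker, c.dir) / d;
    return cos_angle >= std::cos(c.half_angle);
}

// Fitness of a camera set-up: how many markers are visible to >= 2 cameras.
int fitness(const std::vector<Camera>& cams, const std::vector<Vec3>& markers) {
    int covered = 0;
    for (const Vec3& m : markers) {
        int n_seen = 0;
        for (const Camera& c : cams)
            if (sees(c, m)) ++n_seen;
        if (n_seen >= 2) ++covered;
    }
    return covered;
}

int main() {
    std::vector<Camera> cams = {
        {{0, 0, 2}, {1, 0, 0}, 0.6},   // two toy cameras looking along +x
        {{0, 1, 2}, {1, 0, 0}, 0.6},
    };
    std::vector<Vec3> markers = {{3, 0.5, 2}, {-1, 0, 2}};
    std::printf("markers covered: %d of %zu\n", fitness(cams, markers), markers.size());
    return 0;
}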

Topic C) Parallelization of a neural network code -- Mattias Ohlsson (Astronomy & Theoretical Physics)

We have code for training ensembles of neural networks, mostly for various types of classification problems. An ensemble is just a collection of independently trained networks that are then combined into a final model. Currently, all training in the code is done in serial. We wish to adapt the code so that the training of individual members of the ensemble can be run in parallel on a multi-core system. The code is written in C.
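
To show how little needs to change when the loop iterations are independent, here is a minimal C++/OpenMP sketch of the idea. train_member() and the ensemble layout are placeholders, not the actual code, and the same #pragma omp parallel for idiom applies unchanged in C.

// Minimal sketch of the parallelization idea: since ensemble members are
// trained independently, the training loop can be distributed over cores
// with a single OpenMP directive.
// Compile with e.g.:  g++ -fopenmp ensemble.cpp
#include <cstdio>
#include <vector>
#include <omp.h>

struct Network { double weights_placeholder; };   // stand-in for a real network

// Placeholder for the existing (serial) training routine of one member.
Network train_member(int seed) {
    Network net{static_cast<double>(seed)};       // pretend training happened
    return net;
}

int main() {
    const int n_members = 16;
    std::vector<Network> ensemble(n_members);

    // Each iteration is independent, so the loop parallelizes trivially.
    #pragma omp parallel for
    for (int i = 0; i < n_members; ++i) {
        ensemble[i] = train_member(i);
    }

    std::printf("trained %d networks on up to %d threads\n",
                n_members, omp_get_max_threads());
    return 0;
}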

Topic D) Plotting points on a single image -- Alexey Bobrick (Astronomy & Theoretical Physics)

An 800x600 image contains fewer than half a million pixels. How do you plot a dataset with a million points on such an image? This is about sharing expertise/techniques and trying a few ideas. We will provide a test dataset of 10^6 Gaia stars in the LMC. Some possible ways to go: try using points with opacities; represent points as PSFs, similar to SPH; make contour plots with the average point density per pixel colour coded.
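
As one illustration of the last idea, the following C++ sketch bins a synthetic million-point dataset into an 800x600 grid of counts and writes the log-scaled densities as a greyscale image. The Gaussian blob stands in for the Gaia/LMC data, and the PGM output is just a dependency-free way to look at the result.

// Minimal sketch of the "density per pixel" idea: bin N points into an
// 800x600 grid of counts, log-scale the counts, and write a greyscale PGM.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    const int W = 800, H = 600;
    const int N = 1000000;
    std::vector<int> counts(W * H, 0);

    // Fake dataset: a 2D Gaussian blob roughly centred on the image.
    std::mt19937 rng(42);
    std::normal_distribution<double> gx(W / 2.0, W / 8.0), gy(H / 2.0, H / 8.0);
    for (int i = 0; i < N; ++i) {
        int x = static_cast<int>(gx(rng));
        int y = static_cast<int>(gy(rng));
        if (x >= 0 && x < W && y >= 0 && y < H)
            ++counts[y * W + x];               // one increment per point per pixel
    }

    // Log-scale the counts so both sparse and dense regions remain visible.
    int max_count = 0;
    for (int c : counts) max_count = std::max(max_count, c);
    double scale = 255.0 / std::log1p(static_cast<double>(max_count));

    // Write a binary PGM (viewable with most image tools).
    std::FILE* out = std::fopen("density.pgm", "wb");
    if (!out) { std::perror("fopen"); return 1; }
    std::fprintf(out, "P5\n%d %d\n255\n", W, H);
    for (int c : counts) {
        unsigned char pixel = static_cast<unsigned char>(scale * std::log1p(c));
        std::fputc(pixel, out);
    }
    std::fclose(out);
    std::printf("wrote density.pgm (%dx%d, max count per pixel %d)\n", W, H, max_count);
    return 0;
}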