Max Planck Institute of Molecular Cell Biology and Genetics (MPI-CBG)
Phone: +49 351 210-2683
Yaser Afshar has been a PhD student in the MOSAIC group since October 2012. He is an Iranian citizen, born in 1980 in Tehran, Iran.
In 1998, Yaser was ranked 900th in the Iranian national university entrance exam among more than 1,000,000 participants. In 2004, Yaser received his Bachelor's degree in Mechanical Engineering from the K. N. Toosi University of Technology. He subsequently received his Master's degree in Mechanical Engineering, with a major in Thermofluidics, in 2007 from the Isfahan University of Technology (IUT). Yaser was ranked 1st among all Master's students in Mechanical Engineering at his university.
Between 2007 and 2010, Yaser was a lecturer for General Fluid Mechanics and for the Fluid Mechanics Lab at the Isfahan University of Technology. In parallel, he was a research assistant in the Advanced Computing Research group at the Sheikh Bahaei National Supercomputing Center at IUT.
In 2011, Yaser began a 15-month research visit in the condensed matter theory group KOMET331 at the University of Mainz (Germany). During his stay there as a Max Planck IMPRS fellow, he addressed the problem of communication overhead in massively parallel dissipative particle dynamics (DPD) simulations and developed an explicit algorithm for constant-pressure DPD simulations.
In the MOSAIC Group, Yaser implements parallel high-performance image-processing algorithms for segmentation and tracking of large 3D data sets using particle methods.
A Word from Yaser...
I am developing algorithms for real-time segmentation of large images, enabling interactive microscopy and analysis. Real-time means that the processing speed exceeds the speed of image acquisition. The segmentation results can then directly provide quantitative information and a way of feedback-controlling the acquisition process in smart microscopes. This renders microscopy interactive again and closes the feedback loop. My methods are based on distributing each image across multiple computers and exploiting parallel high-performance computing resources. This is difficult because every computer holds only a part of the image, and communication between the computers needs to be orchestrated to guarantee global correctness of the final result.
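The idea of splitting an image across computers while keeping the result globally correct can be sketched in miniature. The snippet below splits a 2D image into horizontal strips, gives each strip a one-row "halo" copied from its neighbours so that a local neighbourhood filter can be evaluated independently on each strip, and then stitches the pieces back together. Everything here is an illustrative assumption: the strip layout, the one-pixel halo, and the simple box filter standing in for a segmentation step are mine, not the group's actual implementation, which runs on distributed-memory clusters where halos would be exchanged between nodes (e.g. via MPI) rather than copied from a shared array.

```python
import numpy as np

def split_with_halo(image, n_parts, halo=1):
    """Split a 2D image into horizontal strips, each padded with `halo`
    rows copied from its neighbours (clipped at the image border).
    Returns tuples (row_start, row_end, padded_strip, offset), where
    `offset` says where the strip's own rows begin inside the padding.
    Illustrative only: on a cluster each strip would live on its own
    node and the halo rows would arrive by message passing."""
    bounds = np.linspace(0, image.shape[0], n_parts + 1, dtype=int)
    strips = []
    for i in range(n_parts):
        lo = max(bounds[i] - halo, 0)
        hi = min(bounds[i + 1] + halo, image.shape[0])
        strips.append((bounds[i], bounds[i + 1],
                       image[lo:hi].copy(), bounds[i] - lo))
    return strips

def local_smooth(strip):
    """3x3 box filter with edge-replicated borders — a stand-in for a
    local processing step (e.g. one iteration of a segmentation
    algorithm) that only needs a one-pixel neighbourhood."""
    padded = np.pad(strip, 1, mode='edge')
    out = np.zeros(strip.shape, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy:1 + dy + strip.shape[0],
                          1 + dx:1 + dx + strip.shape[1]]
    return out / 9.0

# Each strip is processed independently (in parallel on a real system);
# the halo rows make the per-strip results agree with a serial run.
rng = np.random.default_rng(0)
img = rng.random((20, 16))
pieces = [local_smooth(s)[off:off + (b - a)]
          for a, b, s, off in split_with_halo(img, 4)]
result = np.vstack(pieces)
```

Because the halo supplies exactly the neighbour rows the 3x3 filter needs, `result` matches `local_smooth(img)` applied to the whole image, which is the sense in which the distributed computation stays globally correct.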