Tackling Longitudinal Data: Insights from My Time at Manchester Metropolitan University
- gilligk
- Apr 16
- 1 min read
For most researchers working with large datasets, data cleaning is one of the most formidable
challenges. So, imagine my terror when, as a first-year PhD student, I was confronted with
the sheer quantity of information in the Millennium Cohort Study, an ongoing
investigation tracking the lives of thousands of young people born in the UK in 2000-02.
For this reason, I am extremely grateful to have been awarded funding from the COORDINATE
project, which gave me the opportunity to spend two weeks at Manchester Metropolitan
University (MMU), home to a wealth of expertise in secondary data analysis.
At MMU, I received an immense amount of guidance and support from my academic host,
Dr Lee Bently, who kindly provided training on working with large cohort datasets. Through
this training, I developed expertise in handling and merging multiple datasets with varying
structures to create a single, manageable dataset suitable for longitudinal analysis in RStudio; the first sketch below illustrates the kind of merge involved.
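
To give a flavour of this, here is a minimal sketch of merging two cohort sweeps on a shared participant identifier. The file names, the `mcsid` identifier, and the variable names are illustrative placeholders rather than the study's actual naming.

```r
# A minimal sketch of merging two cohort sweeps by participant ID.
# File and variable names here are hypothetical, not the actual
# Millennium Cohort Study names.
library(dplyr)
library(tidyr)
library(haven)  # cohort data are often distributed as SPSS/Stata files

sweep1 <- read_dta("sweep1_interview.dta")  # hypothetical file
sweep2 <- read_dta("sweep2_interview.dta")  # hypothetical file

# Keep only the variables needed, then join on the shared identifier
# so each row describes one cohort member across both sweeps.
merged <- sweep1 %>%
  select(mcsid, age_s1 = age, outcome_s1 = outcome) %>%
  left_join(
    sweep2 %>% select(mcsid, age_s2 = age, outcome_s2 = outcome),
    by = "mcsid"
  )

# Many longitudinal models expect long format: one row per person per sweep.
long <- pivot_longer(
  merged,
  cols = -mcsid,
  names_to = c(".value", "sweep"),
  names_sep = "_s"
)
```
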
I also learnt how to implement techniques for handling missing data, such as
multiple imputation; the second sketch below shows the general shape of that workflow.
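
Here is a minimal sketch of multiple imputation using the `mice` package, applied to the hypothetical `merged` data frame from the previous sketch; the variable names remain illustrative.

```r
# A minimal sketch of multiple imputation with the mice package.
library(mice)
library(dplyr)

# Drop the ID column before imputing, since identifiers should not be
# used as predictors; impute with predictive mean matching ("pmm").
imp <- mice(select(merged, -mcsid), m = 5, method = "pmm", seed = 123)

# Fit the analysis model on each of the five imputed datasets...
fits <- with(imp, lm(outcome_s2 ~ outcome_s1 + age_s1))

# ...then pool the estimates across imputations using Rubin's rules.
summary(pool(fits))
```
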
I was also extremely lucky to be visiting at the same time as several other researchers. It was thanks to them that what would have been a daunting task of longitudinal data analysis became a much easier, collaborative activity.
Beyond improving my coding ability in R, a skill I am sure will prove useful in future
endeavours, this experience also gave me the chance to explore Manchester, where I learned a lot about the city's history and enjoyed its wide variety of cuisine.
Overall, the experience was thoroughly enjoyable and one I am unlikely to forget.
