Researcher in Computational Social Media Analysis & Natural Language Processing
I am a Lecturer in the School of Computer Science and Mathematics at Kingston University, London.
Previously, I was a Senior Research Officer at the University of Essex and an Advance HE Associate Fellow, where I worked alongside Dr. Yunfei Long on the project "Improving multimodality misinformation detection with affective analysis", funded by the Alan Turing Institute.
Before starting this role, I completed my part-time PhD under the supervision of Dr. Arkaitz Zubiaga while working as a part-time Teaching Fellow at Queen Mary University of London, and I received an Enrichment Scheme Placement award (in-person) at the Alan Turing Institute. I have a record of high-quality research output in computational social science and NLP venues such as ICWSM, World Wide Web, and Online Social Networks and Media (2024 best survey paper). During this time I was also a full-time mother of two teenagers and trained as a rock climbing instructor, which is where I developed strong time management skills and learned how to navigate unexpected chaos.
Before moving into academia, I spent ten years in the corporate world as a Software Engineer and Product Manager, working with some of the world's leading companies. I gained experience in building solutions, solving problems, and surviving endless meetings (mostly without losing my sanity). Today, I combine that real-world experience with my research, blending practicality and curiosity to take on new challenges.
My research interests focus on developing innovative deep transfer learning and fair machine learning algorithms to mitigate online harm, such as cyberbullying, disinformation, and misinformation. I am particularly passionate about creating unbiased and transparent AI solutions that enhance responsible decision-making in online social platforms. My work involves exploring robust techniques to detect and prevent harmful content while maintaining fairness, interpretability, and adaptability across diverse social media platforms.
ID-XCB: Data-independent Debiasing for Fair and Accurate Transformer-based Cyberbullying Detection
Detecting harassment and defamation in cyberbullying with emotion-adaptive training