Publications

You can also find my articles on my Google Scholar profile.

Journal Articles

May I Ask a Question? MIA40K: A Large-Scale Educational Conversation Dataset and Generation Pipeline

Large Language Models (LLMs) have shown significant promise in educational applications, but their full potential is constrained by the limited availability of high-quality educational dialogue data, as traditional data collection methods rely heavily on human involvement. This paper presents a fully automated, highly scalable pipeline for generating educational conversation datasets. Our multi-step framework incorporates solution generation, verification, and dialogue synthesis with LLM-as-a-judge filtering to ensure quality control. Using this pipeline, we introduce MIA40K, a dataset of 39,526 teacher-student conversations focused on mathematics and science education. We evaluate our dataset’s conversational and educational quality through standard metrics and demonstrate its utility in educational dialogue tasks.

Recommended citation: Gamsız, A. F., Köksal, A., Korhonen, A., & Schütze, H. (2024). May I Ask a Question? MIA40K: A Large-Scale Educational Conversation Dataset and Generation Pipeline [Work in progress].
Download Paper

Backwards Planning from Onward Task Demonstrations via Vision-Language Models

In this paper, we propose a novel method for backward planning using vision-language models (VLMs). Previous work on backward planning applied traditional methods that ignore the semantic meaning of manipulation tasks. Our proposed framework leverages VLMs’ semantic understanding and physical reasoning capabilities to infer backward plans by analyzing onward task executions. Our method explores the bare-bones use of these models and provides a comprehensive ablation study comparing the planning capabilities of common closed-source VLMs. We demonstrate that our system reaches an 80% success rate across two robotic manipulation tasks. We also observe that several state-of-the-art VLMs struggle significantly with visual understanding, a limitation that still necessitates external embodiment for robust execution. However, the observed planning capabilities suggest that effective backward planning may not require highly complex architectures.

Recommended citation: Gamsız, A. F., Akkoç, D. B., Yıldırım, Y., & Uğur, E. (2025). Backwards planning from onward task demonstrations via vision-language models [Manuscript submitted for review].
Download Paper