Hi, I’m Billy. I’m a PhD student in computer science at Indiana University, working with Zoran Tiganj on deep learning and natural language processing. On the pre-training side, I’m exploring ways to improve the efficiency and performance of large language models in long-context modeling by incorporating compressive memory, drawing particular inspiration from cognitive models of human memory. On the post-training side, I’m interested in evaluating large language models on syntactic and semantic reasoning tasks, and in assessing the similarity between large multimodal models and humans on tasks inspired by cognitive science. I’m also broadly interested in retrieval-augmented generation (RAG) and in integrating knowledge representations with large language models to strengthen their understanding and inference capabilities. More recently, I’ve become interested in infrastructure for training LLMs on large clusters, and in functional programming in Racket.
I did my master’s in computational linguistics at Indiana University, working with Damir Cavar in the NLP Lab on temporal and event reasoning and knowledge representations. Before that, I did my bachelor’s at Michigan State University, where I studied linguistics, TESOL, Chinese, and Korean, and completed a summer language program at Harbin Institute of Technology.
Under Review:
[pdf] [website] [code] Dickson, B., Maini, S. S., Nosofsky, R., & Tiganj, Z. (under review). Comparing Perceptual Judgments in Large Multimodal Models and Humans. Submitted to Behavior Research Methods. https://doi.org/10.31234/osf.io/pcmrj
[code] Dickson, B., Mochizuki-Freeman, J., Kabir, M. R., & Tiganj, Z. (under review). Time-local Transformer. Submitted to Computational Brain & Behavior.
Publications:
[pdf] Cavar, D., Tiganj, Z., Mompelat, L. V., & Dickson, B. (2024). Computing Ellipsis Constructions: Comparing Classical NLP and LLM Approaches. Society for Computation in Linguistics, 7(1), 217–226. https://doi.org/10.7275/scil.2147
[pdf] Cavar, D., Aljubailan, A., Mompelat, L., Won, Y., Dickson, B., Fort, M., Davis, A., & Kim, S. (2022). Event sequencing annotation with TIE-ML. In Proceedings of the 18th Joint ACL-ISO Workshop on Interoperable Semantic Annotation (ISA-18) at LREC 2022, Marseille, France.
[pdf] Cavar, D., Dickson, B., Aljubailan, A., & Kim, S. (2021). Temporal information and event markup language: TIE-ML markup process and schema version 1.0. In Proceedings of SEMAPRO 2021, Barcelona, Spain.
Presentations:
[pdf] Dickson, B., Maini, S. S., & Tiganj, Z. (2024). Comparing LLMs and Cognitive Models of Memory [Poster presentation]. Midwest Speech & Language Days, Ann Arbor, MI, United States.
[pdf] Dickson, B., Maini, S. S., Nosofsky, R., & Tiganj, Z. (2024). Comparing perceptual judgments in large multimodal models and humans [Poster presentation]. Midwest Computer Vision Workshop, Bloomington, IN, United States.
[pdf] Cavar, D., Abdo, M. S., & Dickson, B. (2024, March). Ellipsis in Arabic: Using machine learning to detect and predict elided words [Paper presentation]. 37th Annual Symposium on Arabic Linguistics (ASAL), New York City, NY, United States.
[pdf] Dickson, B., Kim, S., Cavar, D., & Aljubailan, A. (2021). Temporal information and event markup language (TIE-ML) [Poster presentation]. Indiana University, Bloomington, IN, United States.
[pdf] Dickson, B. (2021, April). A simple annotation schema for temporal expressions [Presentation]. Central Kentucky Linguistics Conference, Lexington, KY, United States.
Teaching:
Summer 2024, Generative AI and Symbolic Knowledge Representations: Large Language Models, Knowledge, and Reasoning (ESSLLI 2024, Leuven, Belgium)
Spring 2022, Fall 2023, Spring 2024, Fall 2024, Associate Instructor, Data Mining (Indiana University)
Spring 2020, Adult Communicative Focused English (Michigan State University)