Michtom School of Computer Science

Research Labs

In addition to our faculty's individual research, much research and development takes place in computer science laboratories. Some of these labs are highlighted below.

Brandeis Visual Analytics Laboratory

The Brandeis Visual Analytics Lab (BraVA), led by Professor Dylan Cashman, uses human-centered research methods to develop novel visual encodings for heterogeneous data in order to broaden the accessibility of data science. Previous projects include the development of interpretable interfaces for neural architecture search and other types of artificial intelligence metalearning. Ongoing projects include automated tools for visualizing processes within data science workflows and the design and implementation of tactile graphics for data analysis by blind and visually impaired users. Website coming soon!

Brandeis Autonomous Robotics Teaching Laboratory

The COSI 119a Brandeis Autonomous Robotics Lab, led by Professor Pito Salas, is focused on teaching robotics theory and practice. Students and faculty work side by side on robotics projects large and small. The lab takes a "multi-cohort, multi-semester" approach to teaching, which allows students to learn robotics while helping to build the lab's major project, the Campus Rover. The Campus Rover initiative investigates issues of indoor and outdoor navigation using inexpensive robot platforms.

The Core Machine Learning Lab

The Core Machine Learning Lab at Brandeis University's Michtom School of Computer Science, led by Professor Hongfu Liu, focuses on cutting-edge core research in machine learning and artificial intelligence-assisted applications. Our research spans various areas, including consensus clustering, constrained clustering, balanced clustering, multi-view clustering, interpretable clustering, and fair clustering. Currently, we are working on data-centric learning, which focuses on improving machine learning models by prioritizing the quality and diversity of the data used for training and evaluation, rather than exclusively refining algorithms. This approach recognizes that well-annotated, diverse, and representative datasets are crucial for building reliable and fair models. By addressing issues such as label noise, class imbalance, and missing data, data-centric learning seeks to enhance model performance and generalization capabilities.
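As a rough illustration of one data-centric step (a generic sketch, not code or methods from the lab), the short Python example below uses out-of-fold predictions to flag training examples whose labels look inconsistent with the rest of the data. The synthetic dataset, the logistic-regression model, and the 0.2 confidence threshold are arbitrary choices made for this example.

# Illustrative only: flag potentially mislabeled training examples by comparing
# each observed label to out-of-fold predicted probabilities, then inspect or
# remove the flagged rows before retraining.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

# Synthetic, imbalanced dataset with a few labels deliberately flipped (label noise).
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
rng = np.random.default_rng(0)
noisy_idx = rng.choice(len(y), size=30, replace=False)
y_noisy = y.copy()
y_noisy[noisy_idx] = 1 - y_noisy[noisy_idx]

# Out-of-fold predicted probabilities for the (noisy) labels.
proba = cross_val_predict(
    LogisticRegression(max_iter=1000, class_weight="balanced"),
    X, y_noisy, cv=5, method="predict_proba",
)

# Flag examples whose observed label receives very low predicted probability.
label_confidence = proba[np.arange(len(y_noisy)), y_noisy]
suspect = np.where(label_confidence < 0.2)[0]
print(f"Flagged {len(suspect)} examples as possible label noise")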

Multilingual Language Processing (MLP) Lab

The Multilingual Language Processing Lab, led by Prof. Nianwen Xue, conducts research at the intersection of linguistics and Natural Language Processing (NLP). The lab’s work centers on two key, interrelated activities: first, developing linguistically annotated datasets that enable the training of robust machine learning models, and second, creating novel machine learning methods to address complex NLP problems. Notable past projects include the development of the Chinese Treebank, the Chinese Proposition Bank, Abstract Meaning Representation (AMR) parsers, and models for temporal and modal dependency annotation. Current projects include the Uniform Meaning Representation (UMR) initiative, which seeks to create a unified framework for representing meaning across a wide range of languages; media framing analysis; and fact-checking for news reports generated by Large Language Models (LLMs).

The Smart and Scalable Data Systems (SSD) Lab

The Smart and Scalable Data Systems (SSD) Lab, led by Professor Subhadeep Sarkar, focuses on cutting-edge research problems in databases and data management. Data is at the heart of today’s compute-driven world. At the SSD Lab, we design, build, and tune data systems that offer superior performance in the face of ever-growing data sizes and ever-evolving performance requirements. The lab has two primary research axes. The first axis aims to build data systems that self-adapt and self-tune to offer optimal performance by learning workload characteristics on the fly. The second axis strives to build data systems that protect users’ data privacy by design. This is a particularly important pursuit in a world where data privacy protection has become increasingly critical, and enabling privacy by design as a system property requires fundamental changes at the core of the storage engine. The SSD Lab is part of the database community and regularly publishes in top database conferences and journals, including ACM SIGMOD, PVLDB, IEEE ICDE, EDBT, ACM TODS, and IEEE DEBull.

Computational Systems Biology Group

The Computational Systems Biology Group takes an integrated approach, combining computational and experimental methods to understand the causal and functional relationships that regulate the dynamics of biological networks and translate various extracellular stimuli into numerous cellular phenotypes.

Dynamical & Evolutionary Machine Organization

The Dynamical & Evolutionary Machine Organization (DEMO) lab focuses on Artificial Intelligence, machine learning, and Artificial Life. Jordan Pollack has published in numerous subfields including neural networks, cellular automata, game learning, educational technology, evolution and co-evolution, and robotics. DEMO’s work on automatically designed and fabricated robots was published in Nature in August 2000 and made front-page news worldwide. It was the second of six generations of “Genetically Organized Lifelike Electro-Mechanics,” known as the GOLEM project. A persistent line of research in the lab has been the search for an open-ended arms race toward complexity using co-evolution, work which has resulted in numerous publications and a 2017 lifetime achievement award from the International Society for Artificial Life. Professor Pollack is retiring in 2025 and is not taking on any new PhD students.

Brandeis Laboratory for Linguistics and Computation

The Brandeis Lab for Linguistics and Computation (LLC) conducts research on the design and development of language models for semantic indexing, knowledge extraction, and linguistically based reasoning over large text collections. Theoretical work involves the development of Generative Lexicon Theory and extensions of this theory to parsing and event-based inferencing.


Faculty Spotlight

Subhadeep Sarkar

Subhadeep Sarkar recently joined the faculty in the Computer Science department as an assistant professor. His work lies at the intersection of data layouts and access methods for storage engines and systems-level solutions for privacy protection in modern data systems. The goal of his research is to design efficient privacy-aware data systems by navigating the privacy-performance tradeoff and building data structures and algorithms that support privacy by design. His research interests span data systems, storage layouts, access methods, and their intersection with data privacy.

Before joining Brandeis, Subhadeep was a post-doctoral associate at Boston University.

Visit Subhadeep's website