December 10

Mesomatters: Design, Manufacture and Interact with Mesoscopic Materials
2019-12-10, 13:00–14:00 (America/New_York)

Abstract: Between traditional industrial design, which operates at the macro scale (cm to m), and material engineering, which operates at the micro/nano scale (μm to nm), lies the emerging design space of the mesoscale. It is the scale of a human hair or a grain of sand. It is the scale where material properties meet human perception, and the rational meets the intuitive. In the past 10 years, additive manufacturing, especially 3D printing, has enabled designers to directly manipulate geometries at this scale. Yet existing design and manufacturing methods have not unleashed the full potential of mesoscale materials for the design world. In this talk I propose a material-driven design methodology that employs additive manufacturing to design materials at the mesoscale for interaction and product design. The ability to programmably assemble materials with tailored structures at the centimeter, millimeter, and micrometer length scales enables tunable mechanical and electrical properties. Those properties determine not only the static performance of a material but also, when energized, its dynamic shape change. The emerging material performance and behavior allows us to design unprecedented objects and environments with input (sensing) and output (actuation) capabilities, which can be integrated for the next generation of human-computer interfaces.

Bio: Jifei Ou (欧冀飞) is a designer, researcher and recent entrepreneur. His work focuses on designing and fabricating transformable materials across scales (from μm to m). As much as his work is informed by digital technology, he is inspired in equal measure by the natural world around him. He has been leading projects that study bio-mimicry and bio-derived materials to design shape-changing packaging, garments and furniture. Jifei was born and raised in southwest China and has brought his design practice and scientific research to Asia, Europe and the U.S. His work has been published at academic conferences such as User Interface Software and Technology (UIST, 2013, 2016), Tangible, Embedded and Embodied Interaction (TEI, 2014 & 2016) and Computer-Human Interaction (CHI, 2015 & 2016), and recognized by design competitions such as the A’ Design Award (2016, 2017), the FastCo IBD Award (2016, 2017, 2018), and the IxDA Award (2016). He has been organizing workshops on shape-shifting materials with researchers, high school students and artists around the world. He is also deeply involved in the manufacturing community in Shenzhen in order to facilitate the real-world application of his research. Jifei holds a Ph.D. and M.S. from the MIT Media Lab, and a Diplom in Design from the Offenbach University of Art and Design in Germany.

Location: Kiva (32-G449)

November 05

Prototyping Mixed Reality Experiences
2019-11-05, 13:00–14:00 (America/New_York)

Abstract: In this talk, I will illustrate my vision of mixed reality prototyping, describe the anatomy of mixed reality prototypes and what can be learned and how, with relatively little time and effort. I will start with a brief overview of the HCI research focused on mixed reality interfaces in my lab at Michigan over the last three years. I will then structure the talk around recent projects exploring how to enable rapid prototyping of mixed reality interfaces with limited technical skill and no need for programming. In particular, I will describe techniques for AR/VR content creation from paper mockups and Play-Doh models with support for Wizard of Oz via live video streaming, and techniques for collaborative, immersive authoring of 3D scenes using AR/VR devices as puppets to make virtual objects interactive without programming. Based on these projects, I hope to illustrate possible directions to enable broader participation in the design process by empowering non-technical designers to create complex mixed reality experiences.

Bio: Michael Nebeling (http://michael-nebeling.de) is an Assistant Professor at the University of Michigan where he leads the Information Interaction Lab (https://mi2lab.com). His current research is focused on creating new techniques, tools, and technologies to make AR/VR interface development easier and faster. Michael's vision is that anyone without 3D modeling, animation, or programming background can be an active participant in AR/VR design. His work has received nine Best Paper Awards and Honorable Mentions at the premier HCI conferences. He regularly serves on the program committees of the ACM CHI, UIST, and EICS conferences. He received a 2018 Disney Research Faculty Award and a Mozilla Research award. He joined Michigan in 2016 after completing a postdoc in the HCI Institute at Carnegie Mellon University and a PhD in the Department of Computer Science at ETH Zurich.

Location: Star Room (32-D463)

October 29

Improving the Music Listening Experience: HCI Research at Spotify
2019-10-29, 13:00–14:00 (America/New_York)

Abstract: Music plays an important role in everyday life around the world. People rely on music to manage their mood, express their identity and celebrate milestone events. Streaming services like Spotify have transformed the way that people consume audio by providing listeners with multiple personalized ways to access an abundant catalog of content. In this talk, I will describe several active areas of HCI research at Spotify and present our work on understanding how people search for music and how we can enable exploration for listeners.

Bio: Jenn Thom leads the HCI research lab at Spotify. Her current research interests include understanding how people search for and describe music and developing novel design and prototyping methods for conversational interactions. Prior to joining Spotify, she was a Research Scientist at Amazon where she worked on collecting and mining data to bootstrap new features for the launch of the Echo. She was also a Research Staff Member at IBM Research where she studied how employees used social networks for intercultural collaboration. Jenn received her PhD from Cornell University and her dissertation focused on how people expressed territorial behaviors in user-generated content communities.

Location: Star Room (32-G463)

October 25

Data Models of Sequential Tasks for User Interface Design
2019-10-25, 16:00–17:00 (America/New_York)

Abstract: Today's approaches in data-driven interface design translate observations of user behavior into interface features. However, little consideration is given to the data models that are the computational foundations of these interactions. I will introduce end-to-end techniques that (a) build computational representations that capture diverse and nuanced task context and (b) use those representations as building blocks for designing interfaces. I will illustrate three different techniques with examples from understanding the landscape of web-scale cooking instructions, understanding multiple users' step-by-step demonstrations of a 3D modeling task, and designing voice interactions for tutorial videos.

Speaker Bio: Minsuk Chang is a Ph.D. student in the School of Computing at KAIST. His research in HCI focuses on techniques for discovering, capturing, and structuring task context from user interaction data to create novel learning opportunities in the wild. He has previously interned at Adobe Research, Autodesk Research, and Microsoft Research. minsukchang.com

Location: Seminar Room G575

October 01

What if Software were Different? From Applications to Computational Media
2019-10-01, 13:00–14:00 (America/New_York)

Abstract: The concept of applications is ubiquitous and completely taken for granted in modern computing. But software doesn't have to be synonymous with applications, and there is great potential to be unlocked if we break out of them. In this talk, I will argue for a renewed focus on developing computational media and show efforts we have undertaken to demonstrate how software can be made differently. I will, among other things, present our work on Webstrates (webstrates.net), its authoring environment Codestrates (codestrates.org), and a data visualization environment built using these called Vistrates (vistrates.org).

Bio: Clemens Nylandsted Klokmose is an associate professor in the Department of Digital Design and Information Studies at Aarhus University. Clemens has worked as a postdoc in Computer Science at Aarhus University and at the Laboratoire de Recherche en Informatique, Université Paris-Sud. He has furthermore spent a year as a user interface specialist in the software industry. Clemens received his PhD in Computer Science in 2009 from Aarhus University, supervised by Prof. Susanne Bødker. Clemens' main interest is the fundamentals of interactive computing, particularly supporting and understanding computing with multiple devices and multiple people. Many of his ideas are crystallised into the Webstrates platform (webstrates.net), whose development he leads.

Location: Kiva Room (32-G449)

September 24

Data Feminism
2019-09-24, 13:00–14:00 (America/New_York)

Abstract: As data are increasingly mobilized in the service of global corporations, governments, and elite institutions, their unequal conditions of production, their inequitable impacts, and their asymmetrical silences become increasingly more apparent. It is precisely this power that makes it worth asking: "Data science by whom? For whom? In whose interest? Informed by whose values?" And most importantly, "How do we begin to imagine alternatives for data’s collection, analysis, and communication?" These are some of the questions that emerge from what Lauren Klein and I call Data Feminism (forthcoming from MIT Press in early 2020). Data feminism is a way of thinking about data science and its products that is informed by the past several decades of intersectional feminist activism and critical thought, emerging anti-oppression design frameworks, and scholarship from the fields of Critical Data Studies, Science & Technology Studies, Geography/GIS, Digital Humanities and Human Computer Interaction. An intersectional feminist lens prompts questions about how, for instance, challenges to the male/female binary can also help challenge other binary (and empirically wrong) classification systems. It encourages us to ask how the concept of invisible labor can help to expose the gendered, racialized, and colonial forms of labor associated with data work. And it demonstrates why the data never, ever, speak for themselves. In this talk, I will introduce seven principles for data feminist work: examining and challenging power, rethinking binaries and hierarchies, considering context, embracing pluralism, making labor visible, and elevating emotion. The goal of this work is to transform scholarship into action – to operationalize feminism in order to imagine more ethical and more equitable data practices.

Bio: Catherine D'Ignazio is a scholar, artist/designer and hacker mama who focuses on feminist technology, data literacy and civic engagement. She has run women's health hackathons, designed global news recommendation systems, created talking and tweeting water quality sculptures, and led walking data visualizations to envision the future of sea level rise. Her forthcoming book from MIT Press, Data Feminism, co-authored with Lauren Klein, charts a course for more ethical and empowering data science practices. Her research at the intersection of technology, design & social change has been published in the Journal of Peer Production, the Journal of Community Informatics, and the proceedings of Human Factors in Computing Systems (ACM SIGCHI). In Jan 2020, D'Ignazio will be an assistant professor of Urban Science and Planning in the Department of Urban Studies and Planning at MIT, where she is starting the Data + Feminism Lab.

Location: Star Room (32-D463)

September 10

Learning Programming at Scale: Code, Data, and Environment
2019-09-10, 13:00–14:00 (America/New_York)

Abstract: Modern-day programming is incredibly complex, and people from all sorts of backgrounds are now learning it. It is no longer sufficient just to learn how to code: one must also learn to work effectively with data and with the underlying software environment. In this talk, I will present three systems that I have developed to support learning of code, data, and environment, respectively: 1) Python Tutor is a run-time code visualization and peer tutoring system that has been used by over five million people in over 180 countries to form mental models and to help one another in real time; 2) DS.js uses the web as a nearly-infinite source of motivating real-world data to scaffold data science learning (UIST 2017 Honorable Mention Award); 3) Porta helps experts create technical software tutorials that involve intricate environmental interactions (UIST 2018 Best Paper Award). These systems collectively point toward a future where anyone around the world can gain the skills required to become a productive modern-day programmer.

Bio: Philip Guo is an assistant professor of Cognitive Science and an affiliate assistant professor of Computer Science and Engineering at UC San Diego. His research spans human-computer interaction, programming tools, and online learning. He now focuses on building scalable systems that help people learn computer programming and data science. He is the creator of Python Tutor (http://pythontutor.com/), a widely-used code visualization and collaborative learning platform. So far, over five million people in over 180 countries have used it to visualize over 100 million pieces of Python, Java, JavaScript, C, C++, and Ruby code. Philip's research has won Best Paper and Honorable Mention awards at the CHI, UIST, ICSE, and ISSTA conferences, and an NSF CAREER award. Philip received S.B. and M.Eng. degrees in Electrical Engineering and Computer Science from MIT and a Ph.D. in Computer Science from Stanford. His Ph.D. dissertation was one of the first to create programming tools for data scientists. Before becoming a professor, he built online learning tools as a software engineer at Google, a research scientist at edX, and a postdoc at MIT. Philip's website http://pgbovine.net/ contains over 600 articles, videos, and podcast episodes and gets over 750,000 page views per year.

Location: 32-G449 (Kiva Room). Refreshments at 12:45 PM.
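
As a tiny illustration of the kind of confusion a run-time visualizer such as Python Tutor helps learners untangle, consider aliasing of mutable objects, which becomes obvious once each execution step's heap state is drawn. The snippet below is only an illustrative example, not material from the talk:

```python
# Two names, one list: a step-by-step visualizer shows both names
# pointing at the same heap object, so the "surprise" below disappears.
a = [1, 2, 3]
b = a          # b is an alias of a, not a copy
b.append(4)
print(a)       # [1, 2, 3, 4] -- a changed too

c = list(a)    # an actual copy creates a second heap object
c.append(5)
print(a)       # [1, 2, 3, 4] -- unaffected
print(c)       # [1, 2, 3, 4, 5]
```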

August 06

Designing Intelligent Interactive Systems from an Information-theoretic Perspective
2019-08-06, 13:00–14:00 (America/New_York)

Abstract: In this talk, I explore the notion of information in the human-computer communication process and design intelligent interactive systems using the tools of information theory. Particularly, I propose BIG (Bayesian Information Gain), a framework to quantify the information sent by the user to the computer to express her intention. Two applications, BIGnav for multiscale navigation and BIGFile for hierarchical file retrieval, demonstrate how the computer can play a more active role and work together with the user to achieve shared goals. The third application, Entrain, also shows how the system can shape user experience in the context of collective music making (live demo). My general research interest lies in using computational approaches to design intelligent interactive systems, particularly taking advantage of explicit and implicit information such as the user's intention, attention and semantic activities to empower interaction in artistic and non-artistic settings.

Location: 32-D463 (Star)
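
For readers curious what "quantifying the information sent by the user" can look like concretely, here is a minimal sketch of Bayesian information gain over a discrete set of candidate targets: the expected reduction in entropy about the user's intention after observing one input. The distributions, numbers, and function below are illustrative assumptions, not the BIG implementation presented in the talk:

```python
import numpy as np

def expected_information_gain(prior, likelihood):
    """Expected entropy reduction (in bits) about the intended target
    after observing one user input, i.e. mutual information I(target; input).

    prior:      shape (n_targets,)           -- P(target)
    likelihood: shape (n_targets, n_inputs)  -- P(input | target)
    """
    joint = prior[:, None] * likelihood        # P(target, input)
    p_input = joint.sum(axis=0)                # P(input)
    posterior = joint / p_input                # P(target | input), column-wise

    def entropy(p):
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    h_prior = entropy(prior)
    h_posterior = sum(p_input[i] * entropy(posterior[:, i])
                      for i in range(len(p_input)))
    return h_prior - h_posterior

# Toy example: 4 equally likely targets, 2 possible user inputs.
prior = np.ones(4) / 4
likelihood = np.array([[0.9, 0.1],
                       [0.9, 0.1],
                       [0.1, 0.9],
                       [0.1, 0.9]])
print(expected_information_gain(prior, likelihood))  # ~0.53 bits
```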

July 16

Multisensory Experiences: Beyond Audio-Visual Interfaces
2019-07-16, 13:00–14:00 (America/New_York)

Abstract: Multisensory experiences, that is, experiences that involve more than one of our senses, are part of our everyday life. However, we often tend to take them for granted, at least when our different senses function normally (e.g., normal sight) or are corrected-to-normal (e.g., with glasses). Closer inspection of even the most mundane experiences reveals the remarkable sensory world in which we live. While we have built tools, experiences and computing systems that have played to the human advantages of hearing and sight (e.g., signage, modes of communication, visual and musical arts, theatre, cinema and media), we have long neglected the opportunities around touch, taste, or smell as interface/interaction modalities. In this talk, I will share my vision for the future of computing and what role touch, taste, and smell can play in it.

Speaker Bio: Marianna Obrist is Professor of Multisensory Experiences and Head of the Sussex Computer Human Interaction (SCHI ‘sky’) Lab and the Creative Technology Research Group at the School of Engineering and Informatics at the University of Sussex, UK. Her research focuses on the study of touch, taste, and smell experiences for novel interface design. Before joining Sussex, Marianna was a Marie Curie Fellow at Newcastle University, UK, and prior to this an Assistant Professor at the University of Salzburg, Austria. Marianna is an inaugural member of the ACM Future of Computing Academy, and was selected as a Young Scientist in 2017 and 2018 to attend the World Economic Forum in the People's Republic of China. Marianna is co-chairing the CHI 2030 task force defining a strategy for the future of the ACM CHI conference. Most recently, Marianna became a Visiting Professor at the Burberry Material Futures Research Group at RCA London and is currently a Visiting Professor in the HCI Engineering Group at MIT CSAIL.

Location: 32-G449 (Patil / Kiva)

July 11

Context-Aware Online Adaptation of Mixed Reality Interfaces
2019-07-11, 13:30–14:30 (America/New_York)

Abstract: Mixed Reality has the potential to transform the way we interact with digital information. By blending virtual and real worlds, it promises a rich set of applications, ranging from manufacturing and architecture to interaction with smart devices and gaming, to name only a few. By their nature, MR interfaces will be context-sensitive: since users are no longer bound to a particular location, such systems will need to adapt to a rich variety of environmental conditions (e.g., indoors versus outdoors), external states (e.g., the current task) and internal states (e.g., the current concentration level). This inherent context-awareness does, however, pose significant challenges for the design of MR systems: many UI decisions can no longer be taken at design time but need to be made in situ, depending on the current context. In this talk, I will present an optimization-based approach that automatically adapts MR interfaces based on the user's current context: we estimate users' cognitive load and use knowledge about their task and environment to modify when, where and how virtual content is displayed. I will detail this approach, which uses a mix of rule-based decision making and combinatorial optimization and can be solved efficiently and in real time. I will embed this work within the larger context of my research, which aims at bridging the virtual and physical worlds and allowing users to seamlessly transition between the two.

Bio: David Lindlbauer is a postdoctoral researcher in the field of Human–Computer Interaction, working at ETH Zurich in the Advanced Interaction Technologies Lab, led by Prof. Otmar Hilliges. He holds a PhD from TU Berlin, where he worked with Prof. Marc Alexa in the Computer Graphics group, and interned at Microsoft Research in the Perception & Interaction Group. His research focuses on the intersection of the virtual and the physical world, how the two can be blended and how the borders between them can be overcome. David explores ways that allow users to seamlessly transition between different levels of virtuality, and computational tools to make such approaches more usable. He has worked on projects to expand and understand the connection between humans and technology, from dynamic haptic interfaces to 3D eye-tracking. His research has been published in premier venues such as ACM CHI and UIST, and has attracted media attention in outlets such as Fast Company Design, MIT Technology Review and Shiropen Japan.

Location: 32-G449 (Patil / Kiva)
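
The abstract above mentions combinatorial optimization over which virtual content to display given the user's estimated cognitive load. The following is a toy sketch of that flavor of selection problem (a knapsack-style formulation with made-up element names, utilities and load costs); it is not the model used by the system described in the talk:

```python
from itertools import combinations

# Hypothetical UI elements: (name, utility for current task, cognitive-load cost)
elements = [
    ("navigation arrow", 0.9, 0.3),
    ("machine status panel", 0.7, 0.5),
    ("chat notification", 0.2, 0.4),
    ("step-by-step instructions", 0.8, 0.6),
]

def best_subset(elements, load_budget):
    """Exhaustively pick the subset of virtual elements that maximizes total
    utility while keeping the summed cognitive-load cost under budget."""
    best, best_utility = (), 0.0
    for r in range(len(elements) + 1):
        for subset in combinations(elements, r):
            load = sum(e[2] for e in subset)
            utility = sum(e[1] for e in subset)
            if load <= load_budget and utility > best_utility:
                best, best_utility = subset, utility
    return [e[0] for e in best], best_utility

# A lower budget (e.g., high estimated cognitive load) yields a sparser interface.
print(best_subset(elements, load_budget=0.8))   # only the two most useful elements fit
print(best_subset(elements, load_budget=2.0))   # all four elements fit
```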

July 09

Computational Wave-front Manipulation for Novel User-interface Design
2019-07-09, 13:00–14:00 (America/New_York)

Abstract: Our group has been at the forefront of shaping acoustic wave-fronts to create novel mid-air displays and haptic devices. Our mid-air display is created by trapping tiny objects in the sound field and manipulating them to create persistence-of-vision displays. We manipulate the wave-front by computing acoustic holograms that are delivered using phased arrays of speakers. Similarly, mid-air haptics is created by focusing the pressure wave on the palm of the user. Ultrahaptics is our haptic feedback system that uses acoustic radiation pressure to create tactile stimulation at multiple locations on the user's hand. This feedback is created in mid-air, so users don't have to touch or hold any device to experience it. Recently, we have begun exploring the design and implementation of reconfigurable acoustic metamaterials that augment phased arrays to create complex sound fields. In my talk, I will present some of our recent projects on these topics.

Speaker Bio: Sriram Subramanian is a Professor of Informatics at the University of Sussex (UK) where he leads a research group on designing and implementing novel interactive systems. Specifically, his group looks at engineering wave-fronts to create novel user-interfaces. In 2018, he was named a Royal Academy of Engineering (RAEng) Chair in Emerging Technologies to develop novel acoustic interfaces. Before joining Sussex, he was a Professor of Human-Computer Interaction at the University of Bristol (until July 2015) and prior to this a senior scientist at Philips Research Labs in the Netherlands. Sriram is also the co-founder of Ultrahaptics, a spin-out company that aims to commercialise the mid-air haptics enabled by his research. In 2018, Ultrahaptics won the Queen's Award for Enterprise, and in 2019 it acquired Leap Motion Inc.

Location: 32-G449 (Patil / Kiva)
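
The abstract above describes focusing pressure waves from a phased array of speakers at a point (e.g., on the palm). A minimal sketch of the geometry behind that focusing step is shown below: each transducer is driven with a phase that cancels its propagation delay to the focal point. The array dimensions and parameters are illustrative assumptions, and this is a simplification, not the group's hologram solver:

```python
import numpy as np

SPEED_OF_SOUND = 343.0      # m/s, air at roughly 20 C
FREQ = 40_000.0             # Hz, a typical ultrasonic transducer frequency
WAVELENGTH = SPEED_OF_SOUND / FREQ
K = 2 * np.pi / WAVELENGTH  # wavenumber

def focus_phases(transducer_positions, focal_point):
    """Per-transducer emission phases (radians) so waves arrive at the focal
    point in phase: each element's phase offsets its propagation delay."""
    d = np.linalg.norm(transducer_positions - focal_point, axis=1)
    return (-K * d) % (2 * np.pi)

# Illustrative 8x8 flat array with 10 mm pitch, focusing 15 cm above its center.
xs = (np.arange(8) - 3.5) * 0.01
grid = np.array([(x, y, 0.0) for x in xs for y in xs])
phases = focus_phases(grid, np.array([0.0, 0.0, 0.15]))
print(phases.shape, phases.min(), phases.max())
```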

June 19

Metamaterial Devices
2019-06-19, 11:00–12:00 (America/New_York)

Abstract: Digital fabrication machines such as 3D printers excel at producing arbitrary shapes, such as for decorative objects. Recently, researchers have started to engineer not only the outer shape of objects, but also their internal microstructure. Such objects, typically based on 3D cell grids, are known as metamaterials. Metamaterials have been shown to incorporate extreme properties such as change in volume, programmable shock-absorbing qualities or locally varying elasticity. Traditionally, metamaterials were understood as materials—I think of them as *devices*. I argue that viewing metamaterials as devices allows us to push the boundaries of metamaterials further. In my research, I propose unifying material and device and develop "metamaterial devices". Such metamaterial devices can receive input and process that information to produce output.

Bio: Alexandra Ion is a postdoctoral researcher at ETH Zurich, working with Prof. Olga Sorkine-Hornung on computational design tools for complex geometry. She completed her PhD with Prof. Patrick Baudisch at the Hasso Plattner Institute in Germany. Her research and expertise lie at the intersection of human-computer interaction, digital fabrication, deformation mechanics, and material science. Her research focuses on new types of devices whose functionality is defined solely by the material's microstructure. Her 'metamaterial devices' unify material and device. She investigates interactive computational design tools that assist users in designing the geometry of such intricate cell structures. Alex's work is published at top-tier HCI venues (ACM CHI & UIST). Her work has received Best Paper Honorable Mention awards at ACM UIST and CHI, captured the interest of media such as Wired, Dezeen, Fast Company and Gizmodo, and has been viewed over 250,000 times on YouTube. Her work has been invited for travelling & permanent exhibitions; currently her metamaterial devices are touring through South America and within Germany, and are exhibited at the Ars Electronica Center in Austria.

Location: Kiva (32-G449)

May 20

Towards Unified Principles of Interaction
2019-05-20, 16:00–17:00 (America/New_York)

Abstract: Even though today's computers are used for many different types of tasks, they still rely on user interfaces designed for office workers in the 1980s. Researchers in Human-Computer Interaction have produced a slew of innovative interaction styles, from gestural interaction to mixed reality and tangible interfaces, but they have not replaced traditional GUIs. I argue that we must devise fundamental principles of interaction that unify, rather than separate, interaction styles in order to support the diversity of uses and users. I describe ongoing work on my ERC Advanced Grant, ONE, which explores how the concepts of information substrates and interaction instruments create digital environments that users can appropriate and (re)combine at will.

Bio: Michel Beaudouin-Lafon (PhD, Université Paris-Sud) is a Professor of Computer Science, classe exceptionnelle, at Université Paris-Sud and a senior fellow of the Institut Universitaire de France. He has worked in Human-Computer Interaction for over 30 years and is a member of the ACM SIGCHI Academy. His research interests include fundamental aspects of interaction, novel interaction techniques, computer-supported cooperative work and engineering of interactive systems. He heads the 22M€ Digiscope project and is the laureate of an ERC Advanced Grant. Michel was director of LRI, the laboratory for computer science joint between Université Paris-Sud and CNRS (280 faculty, staff, and Ph.D. students), where he now heads the Human-Centered Computing group. He also founded and co-directed two international masters in HCI. He is currently the chair of the Department of Computer Science (1300 faculty and full-time researchers) of the newly created Université Paris-Saclay. He was Technical Program Co-chair for ACM CHI 2013, sits on the editorial boards of ACM Books and ACM TOCHI, and has served on many ACM committees. He received the ACM SIGCHI Lifetime Service Award in 2015.

Location: 32-G449 (Kiva Room)

April 30

Cognitive Enhancement
2019-04-30, 13:00–14:00 (America/New_York)

Abstract: While today's pervasive digital devices put the world's information at our fingertips, they do not help us with some of the cognitive skills that are arguably more important to leading a successful and fulfilling life, such as attention, memory, motivation, creativity, mindful behavior, and emotion regulation. Building upon insights from psychology and neuroscience, the Fluid Interfaces group creates systems and interfaces for cognitive enhancement. Our designs enhance cognitive ability by teaching users to exploit and develop the untapped powers of their minds and by seamlessly supplementing users' natural cognitive abilities. Our solutions are compact and wearable, and are designed for real-world studies and interventions, rather than laboratory settings. Our work is highly interdisciplinary and combines insights and methods from human computer interaction, body sensor technologies, machine learning, brain computer interfaces, psychology, and neuroscience to create new opportunities for studying and intervening in human psychology in-the-wild.

Bio: Pattie Maes is a professor in MIT's Program in Media Arts and Sciences. She runs the Media Lab's Fluid Interfaces research group, which aims to radically reinvent the human-machine experience. Coming from a background in artificial intelligence and human-computer interaction, she is particularly interested these days in the topic of cognitive enhancement, or how immersive and wearable systems can actively assist people with memory, attention, learning, decision making, communication, and wellbeing. Maes is the editor of three books, and is an editorial board member and reviewer for numerous professional journals and conferences. She has received several awards: Fast Company named her one of 50 most influential designers (2011); Newsweek picked her as one of the "100 Americans to watch for" in the year 2000; TIME Digital selected her as a member of the “Cyber Elite,” the top 50 technological pioneers of the high-tech world; the World Economic Forum honored her with the title "Global Leader for Tomorrow"; Ars Electronica awarded her the 1995 World Wide Web category prize; and in 2000 she was recognized with the "Lifetime Achievement Award" by the Massachusetts Interactive Media Council. In addition to her academic endeavors, Maes has been an active entrepreneur as co-founder of several venture-backed companies, including Firefly Networks (sold to Microsoft), Open Ratings (sold to Dun & Bradstreet) and Tulip Co (privately held). Prior to joining the Media Lab, Maes was a visiting professor and a research scientist at the MIT Artificial Intelligence Lab. She holds a bachelor's degree in computer science and a PhD in artificial intelligence from the Vrije Universiteit Brussel in Belgium.

*Refreshments will be served.
Location: Kiva Room (32-G449)

April 26

Visualization for People + Systems
2019-04-26, 14:00–15:00 (America/New_York)

Abstract: Making sense of large and complex data requires methods that integrate human judgment and domain expertise with modern data processing systems. To meet this challenge, my work combines methods from visualization, data management, human-computer interaction, and programming languages to enable more effective and more scalable methods for interactive data analysis and communication. More specifically, my research investigates automatic reasoning over domain-specific representations of visualization and analysis workflows, in order to produce both improved human-centered designs and system performance optimizations. My work on Vega-Lite provides a high-level declarative language for rapidly creating interactive visualizations. Vega-Lite can serve as a convenient representation for tools that generate visualizations. To create effective designs, these tools must also consider perceptual principles of design. My work on Draco provides a formal model of visual encodings, a knowledge base to reason about visualization design decisions, and methods to learn design rules from experiments. Draco can formally reason over the visualization design space to recommend appropriate designs, but its applications go far beyond that: Draco makes theoretical design knowledge a shared resource that can be extended, tested, and systematically discussed in the research community. The Falcon and Pangloss systems enable scalable interaction and exploration of large data volumes by making principled trade-offs among people's latency tolerance, precomputation, and approximation of computations. A recurring strategy across these projects is to leverage an understanding of people's tasks and capabilities to inform system design and optimization.

Bio: Dominik Moritz is a Computer Science PhD candidate at the University of Washington. He works with Jeffrey Heer and Bill Howe in the Interactive Data Lab and the Database Group. Dominik's research develops scalable interactive systems for visualization and analysis. His systems have won awards at premier academic venues and are available as open source projects with significant adoption by the Python and JavaScript data science communities.

Location: 32-G882
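
For readers unfamiliar with Vega-Lite's declarative style, one way to get a feel for it is through Altair, the Python bindings that compile to Vega-Lite specifications. The dataset and field names below are made up; this is only a small illustration of declaring a chart as a mark plus visual encodings rather than drawing commands:

```python
import altair as alt
import pandas as pd

# A tiny made-up dataset of survey responses per category.
df = pd.DataFrame({
    "category": ["A", "B", "C", "D"],
    "responses": [23, 48, 17, 35],
})

# The chart is described declaratively: a mark type plus encodings.
# Altair compiles this to a Vega-Lite JSON spec; .interactive() adds pan/zoom.
chart = (
    alt.Chart(df)
    .mark_bar()
    .encode(x="category:N", y="responses:Q", tooltip=["category", "responses"])
    .interactive()
)
chart.save("responses.html")  # render the spec with the Vega-Lite runtime
```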

April 23

Hybrid Intelligence Systems: Using Interactive Crowdsourcing to Scaffold Robust Intelligent Systems and Organizations
2019-04-23, 14:00–15:00 (America/New_York)

Abstract: Intelligent systems hold the potential to enable natural, fluid, and efficient interactions with computational tools, but there is a snag: artificial intelligence (AI) is far from being able to understand (e.g., via natural language or vision) and reason about nuanced, real-world settings in full generality. While machine learning (ML) has had significant success on specific classes of problems, generating the massive, tailored training data sets that are needed to make these algorithms work across domains reliably remains a significant challenge. In this talk, I will show that we can use real-time crowdsourcing workflows to create robust intelligent systems that work in a broad range of interactive settings by scaffolding AI/ML capabilities with human intelligence. These scaffolds can facilitate and accelerate on-the-fly training, and are designed to gracefully progress towards full automation as AI becomes more effective in the coming decades. Further, this strategic combination of human and machine effort allows us to create systems that greatly exceed what either can do alone. I will conclude with a discussion of how the insights gained from designing these hybrid intelligence systems can inform richer human-AI interaction, and even allow us to fundamentally rethink how we approach work and organization at all scales.

Bio: Walter S. Lasecki is an Assistant Professor of Computer Science and Engineering at the University of Michigan, Ann Arbor, where he is the founding director of the Center for Hybrid Intelligence Systems and leads the Crowds+Machines (CROMA) Lab. He also previously co-directed the UM-IBM Sapphire Project center, a 20+ member initiative to advance conversational technologies. He and his students create interactive intelligent systems that are robust enough to be used in real-world settings by combining both human and machine intelligence to form Hybrid Intelligence Systems ("HyIntS") that are able to exceed the capabilities of both humans and machines alone. These systems help people be more productive, and improve access to the world for people with disabilities. Prof. Lasecki received his Ph.D. and M.S. from the University of Rochester in 2015 and a B.S. in Computer Science and Mathematics from Virginia Tech in 2010. He has previously held visiting research positions at CMU, Stanford, Microsoft Research, and Google[x].

*Refreshments will be served.
Location: Kiva Room (32-G449)

April 09

Thinking with Visualizations, Fast and Slow
2019-04-09, 13:00–14:00 (America/New_York)

Abstract: Your visual system evolved and develops to process the scenes, faces, and objects of the natural world. You then adapt that system to process the artificial world of graphs, maps, and data visualizations. This adaptation can lead alternatively to fast and powerful – or deeply slow and inefficient – visual processing. I'll use interactive visual tasks to demonstrate the powerful capacity limits that arise when we extract structure and meaning from these artificial displays, which I will argue must occur via a slow serial language-like representation. Understanding these constraints leads to guidelines for display design and instruction techniques, across information dashboards, slide presentations, or STEM education.

Bio: Steven Franconeri is a Professor of Psychology at Northwestern (Weinberg College), with courtesy appointments in Leadership (Kellogg School of Business) and Design (McCormick School of Engineering), and he serves as Director of the Northwestern Cognitive Science Program. His research is on visual thinking, visual communication, decision making, and the psychology of data visualization. Franconeri directs the Visual Thinking Laboratory, where a team of researchers explore how leveraging the visual system - the largest single system in your brain - can help people think, remember, and communicate more efficiently. The laboratory's basic research questions are inspired by real-world problems, providing perspective for new and existing theories, while producing results that translate directly to science, education, design, and business.

Refreshments will be served.
Location: 32-G449 (Kiva Room)

March 19

Data Visualization Across Disciplines
2019-03-19, 13:00–14:00 (America/New_York)

Abstract: What can help enable both the treatment of heart disease and the discovery of newborn stars? Visualization. Specifically, interdisciplinary data visualization: the sharing and co-development of tools and techniques across domains. Visualization is a powerful tool for data exploration and analysis. With data ever-increasing in quantity, having effective visualizations is necessary for knowledge discovery and data insight. In this talk I will share sample results from my own research and experience crossing disciplines and bringing together the knowledge and experts of computer science, astrophysics, radiology and medicine. I will present new visualization techniques and tools inspired by this work for the astronomical and medical communities, including Glue, a multi-dimensional linked-data visual exploration tool.

Bio: Dr. Michelle Borkin works on the development of novel visualization techniques and tools to enable new insights and discoveries in data. She works across disciplines to bring together computer scientists, doctors, and astronomers to collaborate on new analysis and visualization techniques, and cross-fertilize techniques across disciplines. Her research has resulted in the development of novel computer-assisted diagnostics in cardiology, scalable visualization solutions for large network data sets, and novel astrophysical visualization tools and discoveries. Her main research interests include information and scientific visualization, hierarchical and multidimensional data representations, network visualization, visualization cognition, user interface design, human-computer interaction (HCI), and evaluation methodologies. Dr. Borkin is an Assistant Professor in the Khoury College of Computer Sciences at Northeastern University. Prior to joining Northeastern, she was a Postdoctoral Research Fellow in Computer Science at the University of British Columbia, as well as Associate in Computer Science at Harvard and Research Fellow at Brigham & Women's Hospital. She received her Ph.D. in Applied Physics at Harvard's School of Engineering and Applied Sciences (SEAS) in 2014. She also has an MS in Applied Physics and a BA in Astronomy and Astrophysics & Physics from Harvard University. She was previously a National Science Foundation (NSF) Graduate Research Fellow, a National Defense Science and Engineering Graduate (NDSEG) Fellow, and a TED Fellow.

*Snacks will be served.
Location: 32-G449 (Kiva Room)

March 05

Software Engineers are People Too: Applying Human Centered Approaches to Improve Software Development
2019-03-05, 13:00–14:00 (America/New_York)

Abstract: Software engineers might think that human-computer interaction (HCI) is all about improving the interfaces for their target users through user studies. However, software engineers are people too, and they use a wide variety of technologies, from programming languages to search engines to integrated development environments (IDEs). And the field of HCI has developed a wide variety of human-centered methods, beyond lab user studies, which have been proven effective for answering many different kinds of questions. In this talk, I will use examples from my own research to show how HCI methods can be successfully used to improve the technologies used in the software development process. For example, "Contextual Inquiry" (CI) is a field study method that identifies actual issues encountered during work, which can guide research and development of tools that will address real problems. We have used CIs to identify nearly 100 different questions that developers report they find difficult to answer, which inspired novel tools for reverse-engineering unfamiliar code and for debugging. We used the HCI techniques of Paper Prototyping and Iterative Usability Evaluations to improve our programming tools. Through the techniques of Formal User Studies, we have validated our designs and quantified the potential improvements. Current work is directed at improving the usability of APIs, using user-centered methods to create a more secure Blockchain programming language, addressing the needs of data analysts who do exploratory programming, helping programmers organize information found on the web, and helping end-user programmers augment what intelligent agents can do on smartphones.

Bio: Brad A. Myers is a Professor in the Human-Computer Interaction Institute in the School of Computer Science at Carnegie Mellon University. He was chosen to receive the ACM SIGCHI Lifetime Achievement Award in Research in 2017, for outstanding fundamental and influential research contributions to the study of human-computer interaction. He is an IEEE Fellow, ACM Fellow, member of the CHI Academy, and winner of 12 Best Paper type awards and 5 Most Influential Paper Awards. He is the author or editor of over 500 publications, including the books "Creating User Interfaces by Demonstration" and "Languages for Developing User Interfaces," and he has been on the editorial board of six journals. He has been a consultant on user interface design and implementation to over 85 companies, and regularly teaches courses on user interface design and software. Myers received a PhD in computer science at the University of Toronto where he developed the Peridot user interface tool. He received the MS and BSc degrees from the Massachusetts Institute of Technology, during which time he was a research intern at Xerox PARC. From 1980 until 1983, he worked at PERQ Systems Corporation. His research interests include user interfaces, programming environments, programming language design, end-user software engineering (EUSE), API usability, developer experience (DevX or DX), interaction techniques, programming by example, handheld computers, and visual programming. He belongs to ACM, SIGCHI, IEEE, and the IEEE Computer Society.

*Snacks will be served.
Location: 32-G449 (Kiva)

February 12

Empowering Algorithms and Algorithm Disillusionment
2019-02-12, 13:00–14:00 (America/New_York)

Abstract: Algorithms play a large role in shaping what we see and don't see online. In this talk I discuss people's awareness of algorithms in their daily online social life, the power people attribute to algorithms, and how and when people become disillusioned by them. I further discuss two approaches to addressing control and whether people want control.

Bio: Karrie Karahalios is a Professor of Computer Science, a Co-director of the Center for People and Infrastructures at the University of Illinois at Urbana-Champaign, and a Senior Research Scientist at Adobe Research. She completed an S.B. in Electrical Engineering, an M.Eng. in Electrical Engineering and Computer Science, and an S.M. and Ph.D. in Media Arts and Sciences at MIT. Her main area of research is Social Computing—more specifically, social network analysis, relationship modeling, social media interface design, social media feed algorithm awareness/literacy, social visualization, group dynamics, speech-delay assistive technologies, and tools for speech-delay diagnoses. She has been awarded a Sloan Research Fellowship, a Harvard Berkman Center for Internet and Society Fellowship, a Kavli Fellowship, the A. Richard Newton Breakthrough Research Award, an NSF Early Career Award, and an NCSA Fellowship, among others.

*Snacks will be served.
Location: 32-G449 (Kiva)