July 9-10, 2018
IRCAM, 1 place Igor Stravinsky, 75004 Paris

As digital technology becomes pervasive in our everyday lives, increasingly complex elements of human behaviour and cognition must be considered in the design of Human-Computer Interaction (HCI). Future uses could increasingly involve human learning and creativity, with technologies enabling personalisation and appropriation. Important questions stem from these emerging issues in HCI, where interaction can be conceived of as collaborating, cooperating, delegating, competing, negotiating or communicating. By bringing together world-renowned researchers in HCI, health, robotics, and the arts, the HAMAC workshop aims to provoke, gather and challenge various views on possible paradigms of Human-Machine Collaboration.

Invited speakers
  • Memo Akten, Goldsmiths College, University of London
  • Michel Beaudouin-Lafon, LRI, Univ Paris-Sud, University Paris-Saclay, Inria
  • Nadia Bianchi-Berthouze, University College London
  • Sylvain Calinon, Idiap Research Institute
  • Sarah Fdili Alaoui, LRI, Univ Paris-Sud, University Paris-Saclay, Inria
  • Jules Françoise, CNRS LIMSI, Univ. Paris-Sud
  • Marco Gillies, Goldsmiths College, University of London
  • Nathanaël Jarrassé, CNRS ISIR, Sorbonne Université
  • Wendy Mackay, LRI, Univ Paris-Sud, University Paris-Saclay, Inria
  • Joanna McGrenere, University of British Columbia
  • Ana Tajadura-Jiménez, Universidad Carlos III de Madrid
Registration and contact

Registration is free (the number of seats is limited). Please register before July 2nd, 2018 here:
https://www.eventbrite.fr/e/billets-hamac-workshop-on-human-machine-collaboration-in-embodied-interaction-47213736557

Contact email: hamacworkshop@gmail.com

Organisers

Scientific program
Baptiste Caramiaux, CNRS, Univ Paris-Sud, University Paris-Saclay, Inria
Frédéric Bevilacqua, UMR STMS Ircam – CNRS – Sorbonne Université – Ministère de la Culture

Ircam organisation
Sylvie Benoit, Eric de Gelis, Pascale Bondu, Claire Marquet

Program

Monday 9 July

09:30 Welcome
10:00 Introduction
Baptiste Caramiaux & Frédéric Bevilacqua
10:30 Creating Human-Computer Partnerships
Michel Beaudouin-Lafon & Wendy Mackay
11:00 Designing Personalized User Interfaces as a Human-Computer Partnership
Joanna McGrenere
11:30 Pause
11:45 Discussions
13:00 Lunch
14:30 Robot Learning from Few Demonstrations by Exploiting the Structure and Geometry of Data
Sylvain Calinon
15:00 Designing Interactive Auditory Feedback by Demonstration
Jules Françoise
15:30 Interactive Machine Learning for Embodied Interaction
Marco Gillies
16:00 Pause
16:15 Discussions

Tuesday 10 July

09:30 Welcome
10:00 Designing for Movement in Dance and Choreography
Sarah Fdili Alaoui
10:30 Intelligent Machines that Learn: What Do They Know? Do They Know Things?? Let’s Find Out!
Memo Akten
11:00 Pause
11:15 Discussions
12:30 Lunch
14:00 Movement-Based Control of Upper Limb Prostheses: Towards the Decoding of Body Language
Nathanaël Jarrassé
14:30 The Affective Multisensorial Body in a Technology-Mediated World
Nadia Bianchi-Berthouze & Ana Tajadura-Jiménez
15:30 Pause
15:45 Discussions
17:00 Closing remarks

Introduction

Introduction by the organisers: Baptiste Caramiaux (CNRS), Frédéric Bevilacqua (IRCAM)

→ slides (pdf)

 

Titles, abstracts and bios

Intelligent Machines that Learn: What Do They Know? Do They Know Things?? Let’s Find Out!
Memo Akten, Goldsmiths College, University of London

As computers and software become ‘smarter’, more autonomous and ubiquitous, how does this impact human creativity, and the role of the artist? In this talk, I’ll briefly cover some of my own meanderings in this area, particularly within the context of the recent developments in deep learning. This includes explorations in i) real-time, interactive computational systems for artistic, creative expression, ii) intelligent systems for human-machine collaborative creativity, and iii) investigating how we make sense of the world, and project meaning onto noise.

Memo Akten is an artist working with computation as a medium, exploring the collisions between nature, science, technology, ethics, ritual, tradition and religion. Combining critical and conceptual approaches with investigations into form, movement and sound, he creates data dramatizations of natural and anthropogenic processes. Alongside his practice, he is currently working towards a PhD in artificial intelligence and expressive human-machine interaction at Goldsmiths, University of London. His work has been shown and performed internationally and featured in books and academic papers; in 2013 Akten received the Prix Ars Electronica Golden Nica for his collaboration with Quayola, ‘Forms’.

 

Robot learning from few demonstrations by exploiting the structure and geometry of data
Sylvain Calinon, Idiap Research Institute

Many human-centered robot applications would benefit from the development of robots that could acquire new movements and skills from human demonstration, and that could reproduce these movements in new situations. From a machine learning perspective, the challenge is to acquire skills from only a few interactions, with strong generalization demands. It requires the development of intuitive active learning interfaces to acquire meaningful demonstrations, the development of models that can exploit the structure and geometry of the acquired data in an efficient way, and the development of adaptive controllers that can exploit the learned task variations and coordination patterns. The developed models need to serve several purposes (recognition, prediction, generation), and be compatible with different learning strategies (imitation, emulation, exploration).
I will present an approach combining model predictive control, statistical learning and differential geometry to pursue this goal. I will illustrate the proposed approach with various applications, including robots that are close to us (human-robot collaboration, a robot for dressing assistance), part of us (prosthetic hand control from tactile array data), or far from us (teleoperation of a bimanual robot in deep water).
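As a loose illustration of one idea in this abstract, exploiting the variation observed across a few demonstrations, the sketch below is a hypothetical toy example (not the model predictive control and differential-geometry tools presented in the talk): it encodes time-aligned demonstrations by their per-time-step mean and variance and derives a tracking gain from the variance, so that a controller could track tightly where the demonstrations agree and loosely where they vary.

import numpy as np

# Three hypothetical, time-aligned demonstrations of a 1-D motion (8 time steps each).
demos = np.array([
    [0.00, 0.12, 0.27, 0.45, 0.63, 0.80, 0.93, 1.00],
    [0.00, 0.10, 0.24, 0.42, 0.61, 0.82, 0.94, 1.00],
    [0.00, 0.14, 0.30, 0.48, 0.66, 0.78, 0.92, 1.00],
])

mean_traj = demos.mean(axis=0)   # reference trajectory to reproduce
var_traj = demos.var(axis=0)     # task variation observed across the demonstrations

# Inverse variance as a (scalar) tracking gain: low variance -> tight tracking,
# high variance -> the controller is free to deviate (e.g. to adapt to a new situation).
gain = 1.0 / (var_traj + 1e-4)

print("reference:", mean_traj.round(2))
print("gain:     ", gain.round(1))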

→ slides (pdf)

Dr Sylvain Calinon is a Senior Researcher at the Idiap Research Institute (http://idiap.ch). He is also a lecturer at the Ecole Polytechnique Fédérale de Lausanne (EPFL), and an external collaborator at the Department of Advanced Robotics (ADVR), Italian Institute of Technology (IIT). From 2009 to 2014, he was a Team Leader at ADVR, IIT. From 2007 to 2009, he was a Postdoc at the Learning Algorithms and Systems Laboratory, EPFL, where he obtained his PhD in 2007. He is the author of 100+ publications at the crossroads of robot learning, adaptive control and human-robot interaction, with recognition including Best Paper Awards in the journal Intelligent Service Robotics (2017) and at IEEE Ro-Man’2007, as well as Best Paper Award Finalist at ICRA’2016, ICIRA’2015, IROS’2013 and Humanoids’2009. He currently serves as an Associate Editor for IEEE Transactions on Robotics (T-RO), IEEE Robotics and Automation Letters (RA-L), Intelligent Service Robotics (Springer), and Frontiers in Robotics and AI.

 

The affective multisensorial body in a technology-mediated world
Nadia Bianchi-Berthouze, University College London
Ana Tajadura-Jiménez, Universidad Carlos III de Madrid

Body movement is an important modality in the affective life of people. With the emergence of full-body sensing technology come new opportunities to support people’s affective experiences and needs. Although we are now able to track people’s body movements almost ubiquitously through a variety of low-cost sensors embedded in our environment as well as in our accessories and clothes, the information garnered is typically used for activity tracking rather than for recognising and modulating affect. In her talk, Nadia will highlight how we express affect through our bodies in everyday activities and how technology can be designed to read those expressions and even to modulate them. She will present her group’s work on technology for chronic pain management and discuss how such technology can lead to more effective physical rehabilitation by integrating it into everyday activities and supporting people at both the physical and affective levels. She will also discuss how this sensing technology enables us to go beyond simply measuring and reflecting on one’s behaviour by exploiting embodied bottom-up mechanisms that enhance the perception of one’s body and its capabilities.
Continuing on the latter aspect, Ana will talk about how neuroscientifically grounded insights that body perceptions are continuously updated through sensorimotor information may contribute to the design of new, enhanced body-centred technologies. Ana will present their work on how sound feedback on one’s actions can be used to alter body perception, and in turn enhance positive emotions and change motor behaviour. She will also present their current project, which aims to inform the design of wearable technology in which sound-driven changes in body perception may be used to enhance behavioural patterns, confidence and motivation for physical activity. We will discuss how, beyond the focus on real-life applications, novel technologies and algorithms for body sensing and sensory feedback may also become a research tool for investigating how emotional and multisensory processes shape body perception. We will conclude by identifying new challenges and opportunities that this line of work presents.

Nadia Bianchi-Berthouze is a Full Professor in Affective Computing and Interaction at the University College London Interaction Centre (UCLIC). Her research focuses on designing technology that can sense the affective state of its users and use that information to tailor the interaction process. She has pioneered the field of Affective Computing by investigating how body movement and touch behaviour can be used as a means to recognize and measure the quality of the user experience. She has also studied how full-body technology and body sensory feedback can be used to modulate people’s perception of themselves and of their capabilities, to improve self-efficacy and coping capabilities. Her work has been motivated by real-world applications such as physical rehabilitation (EPSRC Emo&Pain), textile design (EPSRC Digital Sensoria), education (H2020 WeDraw) and wellbeing on the factory floor (H2020 Human Manufacturing). She has published more than 200 papers in Affective Computing, HCI, and Pattern Recognition.

Ana Tajadura-Jiménez is a Ramón y Cajal research fellow at Universidad Carlos III de Madrid (UC3M) and an honorary research associate at the University College London Interaction Centre (UCLIC). Her research focuses on understanding how sound-based interaction technologies could be used to alter people’s perceptions of their own body and the surrounding space, as well as their emotional state and their motor behaviour patterns. This research is empirical and multidisciplinary, combining perspectives from psychoacoustics, neuroscience and HCI. She is currently Principal Investigator of the MagicShoes project, whose aim is to make people feel better about their bodies and sustain active lifestyles. Prior to this she obtained a PhD in Applied Acoustics at Chalmers University of Technology (Sweden). She was a post-doctoral researcher in the Lab of Action and Body at Royal Holloway, University of London, and an ESRC Future Research Leader and Principal Investigator of The Hearing Body project at University College London (UCL). Ana is most passionate about wearable technology for emotional and physical well-being.

 

Designing for movement in dance and choreography
Sarah Fdili Alaoui, LRI, Univ Paris-Sud, University Paris-Saclay, Inria

Human movement has historically been approached as a functional component of interaction within Human-Computer Interaction (HCI). This design approach reflects the task-oriented focus of early HCI research, which was preoccupied with ergonomics and efficiency. Yet movement is not solely functional: it is also highly experiential, expressive and creative.
While human movement is ubiquitously present in all forms of technology interaction, movement expertise is often absent from the design of technology. In my work, I investigate movement in dance, choreography and movement practices such as Laban Movement Analysis to design technologies for the experiential body within digital art and dance. Moreover, I integrate notions of movement drawn from my practice of dance and choreography, such as movement qualities, as interaction modalities, in an attempt to meet human needs as a whole and to encourage movement exploration, curiosity and reflection.
From a methodological perspective, I consider first-person methodologies that research experience and support both the designer’s and the user’s expression. I use research through practice and ethnographic or phenomenological methods instead of those advocating abstract notions such as efficiency, accuracy and usability.
Finally, drawing on HCI theories of substrates, instrumental interaction and embodied interaction, I design interactive systems that support user agency, system guidance and novelty in the choreographic process. The goal is to design technology that encourages movement crafting and highlights choreographic patterns that support users’ creative choices and strategies.

Sarah Fdili Alaoui is an assistant professor in interaction design and interactive arts at LRI-Université Paris-Sud. She is a dance artist, choreographer and Laban Movement Analyst. Before her current position, she was a researcher at the School of Interactive Arts+Technology at Simon Fraser University in Vancouver. She holds a PhD in Art and Science from University Paris-Sud 11 and the IRCAM-Centre Pompidou research institute, as well as an MSc from University Joseph Fourier and an Engineering Degree in Applied Mathematics and Computer Science from ENSIMAG, and has over 20 years of training in ballet and contemporary dance. Sarah is interested in bridging scientific and experiential research in the movement-based arts to radically alter and affect our understanding of movement, human knowledge and cognition. She brings dance and technologies together, collaborating with dancers, visual artists, computer scientists and designers to create interactive dance performances, interactive installations and tools for supporting choreography.

 

Designing Interactive Auditory Feedback by Demonstration
Jules Françoise, CNRS LIMSI, Univ. Paris-Sud

Technologies for sensing movement are expanding toward everyday use in virtual reality, gaming, and artistic practices. In this context, there is a need for methodologies to help designers and users create meaningful movement experiences. This presentation will discuss a user-centered approach for the design of interactive auditory feedback. This method uses interactive machine learning to build the motion-sound mapping from user demonstrations of movements associated with sound examples. Importantly, it emphasises an iterative design process that integrates acted and interactive experiences of the relationships between movement and sound. This presentation will outline the overall methodology, discuss some relevant probabilistic models for movement modelling, and present several concrete applications for the design of auditory feedback in artistic contexts.
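As a rough, hypothetical sketch of the mapping-by-demonstration idea described above (a generic regressor stands in for the probabilistic models discussed in the talk; the motion features, sound parameters and values below are invented for illustration), one can train a model on paired motion and sound data recorded during a demonstration and then use it to drive synthesis parameters from live motion:

# Illustrative sketch only: a generic regressor replaces the probabilistic
# motion models discussed in the talk; feature and parameter choices are hypothetical.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Demonstration phase: motion features (e.g. wrist speed, acceleration magnitude)
# recorded together with the sound parameters (e.g. pitch in Hz, gain) that the
# designer associated with them.
motion_demo = np.array([[0.1, 0.0], [0.5, 0.2], [0.9, 0.8]])
sound_demo = np.array([[220.0, 0.1], [440.0, 0.5], [880.0, 0.9]])

# "Training" the motion-sound mapping from the demonstration.
mapping = KNeighborsRegressor(n_neighbors=2).fit(motion_demo, sound_demo)

# Performance phase: live motion features are mapped to sound parameters.
live_motion = np.array([[0.6, 0.4]])
pitch, gain = mapping.predict(live_motion)[0]
print(f"pitch={pitch:.1f} Hz, gain={gain:.2f}")

In an actual system, the predicted parameters would be streamed to a sound synthesis engine in real time rather than printed.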

→ slides (pdf)

Jules Françoise is a CNRS researcher at LIMSI (Laboratoire d’Informatique pour la Mécanique et les Sciences de l’Ingénieur), interested in Movement-based Human-Computer Interaction. He holds a PhD in computer science from Université Pierre et Marie Curie, which he completed at Ircam-Centre Pompidou within the {Sound Music Movement} Interaction team. He is a steering committee member of the MOCO conference (International Conference on Movement and Computing).

 
 

Interactive Machine Learning for Embodied Interaction
Marco Gillies, Goldsmiths College, University of London

Interaction based on human movement has the potential to become an important new paradigm of human-computer interaction, but for it to become mainstream there need to be effective tools and techniques to support designers. A promising approach to movement interaction design is Interactive Machine Learning, in which designing is done by physically performing movements. This talk will bring together many different perspectives on understanding human movement knowledge and movement interaction. This understanding shows that the embodied knowledge involved in movement interaction is very different from the representational knowledge involved in a traditional interface, so a very different approach to design is needed. It will apply this knowledge to understanding why interactive machine learning is an effective tool for movement interaction designers and to making a number of suggestions for the future development of the technique.

Marco Gillies is a Reader in Computing at Goldsmiths, University of London, where he co-founded the Creative Computing degree and helped pioneer an interdisciplinary approach to computing that takes coding seriously as a creative practice. His research touches on many topics, including Virtual Reality, social interaction analysis, embodied interaction, human-centred machine learning and educational technology, but is unified by an interest in how technology can make use of the tacit and embodied in human behaviour.

 
 

Movement-based control of upper limb prostheses: towards the decoding of body language
Nathanaël Jarrassé, CNRS ISIR, Sorbonne Université

The main research trend in the field of prosthetic control is now the decoding of the subject’s motor intentions from the peripheral or central nervous system through machine learning and increasingly invasive techniques. However, these approaches suffer from numerous problems, ranging from signal measurement and robustness to the complexity of invasive techniques (which questions the benefit/risk balance) and, above all, the difficulty for humans to precisely control these physiological activities (myoelectric or cerebral activity), which requires heavy training and attention from users. Indeed, while human subjects can easily control (and feel, through numerous sensory modalities) their movements, they can hardly control (and feel) their muscle activity or their “cerebral states” in a precise way. In this presentation, I will thus introduce the alternative approaches that we are developing at ISIR, which instead exploit the measurement of the amputated subject’s body movements and fundamental knowledge of motor control (synergies, redundancy and motor compensatory strategies) to offer more intuitive, natural and efficient control strategies to arm prosthesis users.

Nathanaël Jarrassé received an M.Sc. in Industrial Systems Engineering from Arts et Métiers ParisTech, and an M.Sc. and a Ph.D. in Robotics from UPMC, Paris. He has been a postdoctoral Research Associate at the HRG, Department of Bioengineering of Imperial College London, and is now a Tenured Researcher for the National Center for Scientific Research (CNRS) at ISIR, Sorbonne Université. His research focuses on physical human-robot interaction for medical applications, and aims at developing robotic interactive systems (prostheses, exoskeletons, instrumented objects) to study and characterize the human sensorimotor system, and to improve assistance and rehabilitation of individuals affected by motor skill loss.

 

Creating Human-Computer Partnerships
Wendy Mackay, LRI, Univ Paris-Sud, University Paris-Saclay, Inria
Michel Beaudouin-Lafon, LRI, Univ Paris-Sud, University Paris-Saclay, Inria

The classic approach to Artificial Intelligence treats the human being as a cog in the computer’s process — the so-called “human-in-the-loop”. By contrast, the classic approach to Human-Computer Interaction seeks to create a ‘user experience’ with the computer. We seek a third approach, a true human-computer partnership that takes advantage of machine learning but leaves the user in control. We describe how we can create interactive systems that are discoverable, appropriable and expressive, drawing on the principles of instrumental interaction and reciprocal co-adaptation. Our goal is to create robust interactive systems that grow with the user, with a focus on augmenting human capabilities.

→ slides (pdf)

Resources associated with the slides:
– Slide 28 – OctoPocus: https://vimeo.com/2116172
– Slide 35 – CommandBoard: https://www.youtube.com/watch?v=HNdI9EmxAvc
– Slide 43 – Fieldward: https://www.youtube.com/watch?v=F-Z8uj6GCSY
– Slide 51 – Expressive keyboard: https://www.youtube.com/watch?v=iROqaskPPYU

Wendy Mackay is a Research Director, Classe Exceptionnelle, at Inria, France, where she heads the ExSitu (Extreme Situated Interaction) research group in Human-Computer Interaction at Université Paris-Sud. After receiving her Ph.D. from MIT, she managed research groups at Digital Equipment and Xerox EuroPARC, which were among the first to explore interactive video and tangible computing. She has been a visiting professor at the University of Aarhus and Stanford University and recently served as Vice President for Research at Université Paris-Sud. Wendy is a member of the ACM CHI Academy, is a past chair of ACM/SIGCHI, chaired CHI’13 and recently received the ACM/SIGCHI Lifetime Achievement Service Award. She also received the prestigious ERC Advanced Grant for her research on co-adaptive instruments. She has published over 150 peer-reviewed research articles in the area of Human-Computer Interaction. Her current research interests include participatory design, creativity, co-adaptive instruments, mixed reality and interactive paper, and multidisciplinary research methods.

Michel Beaudouin-Lafon (PhD, Université Paris-Sud) is a Professor of Computer Science, classe exceptionnelle, at Université Paris-Sud and a senior fellow of the Institut Universitaire de France. He has published over 170 papers and is a member of the ACM SIGCHI Academy. His research interests include fundamental aspects of interaction, novel interaction techniques, computer-supported cooperative work and the engineering of interactive systems. He is the laureate of an ERC Advanced Grant for his work on instrumental interaction and information substrates. Michel was director of LRI, the joint computer science laboratory of Université Paris-Sud and CNRS (280 faculty, staff, and Ph.D. students), and now heads the Human-Centered Computing lab at LRI. He was Technical Program Co-chair for CHI 2013, sits on the editorial boards of ACM Books and ACM TOCHI, and has served on many ACM committees. He received the ACM SIGCHI Lifetime Service Award in 2015.

 

Designing Personalized User Interfaces as a Human-Computer Partnership
Joanna McGrenere, University of British Columbia

There is no such thing as an average user. Users bring their own individual needs, desires, and skills to their everyday use of interactive technologies. Yet many of today’s technologies, from desktop applications to mobile devices and apps, are still designed for some mythical average user. It seems intuitive that interfaces should be designed with adaptation in mind so that they would better accommodate individual differences among users. Yet, what seems intuitive is not necessarily straightforward. I will highlight examples from my group’s research in the area of personalized user interfaces. The focus will be on various approaches to adaptation and what we’ve learned about the strengths and limitations of those approaches. I will argue that the most promising future opportunities lie with a human-computer partnership model, but that such a model is challenging to design.

→ slides (pdf)

Joanna McGrenere is a Professor in Computer Science at the University of British Columbia (UBC) and is an Inria & Université Paris-Sud International Research Chair. She earned her PhD from the University of Toronto, after completing her MSc at UBC. Joanna specializes in Human-Computer Interaction research, with a focus on designing personalized user interfaces and computer-supported cooperative work, as well as on developing interactive systems for diverse user populations, including older adults and people with impairments. She is currently serving as Program Co-Chair for ACM ASSETS 2018 and will be the overall Technical Program Co-Chair for ACM CHI 2020. She is a member of the editorial boards of ACM Transactions on Computer-Human Interaction (TOCHI) and ACM Transactions on Accessible Computing (TACCESS).