Cars, Computing and the Future of Work: A UW & MSR Workshop: Welcome and Overview



>>So I think we’re
ready to get started. I guess I just want to welcome
everybody to this session on cars, computing, and the future of work. This is a collaboration between
Microsoft and our U-Dub team, which actually includes
more than the U-Dub, but it includes Harvard, Wellesley, University
of New Hampshire, as well as the University
of Wisconsin-Madison, and we’ll be introducing
everybody in a second. I'm going to actually have Ed give some logistics first, and then Eric and I will get started on the program.>>Excellent. Well, thank you
all very much for coming today. We’re extremely excited
to have you here. We have a great program with lots of new research, slides,
video, everything. But the most important part
of today is going to be the discussion and
collaboration that we have. So we’re very excited to have you working on the papers on your tables. We’re going to be brainstorming
together and we’re fortunate to have some good friends
from Urban Wild Studios. We’re going to do graphic
note-taking throughout the day. So this is going to be very dynamic. We will ask because
we’re recording it and we have a couple of
collaborators on the phone. If you have questions,
we’ll bring you a mic. I think it’s going to
be a wonderful day. You know where the coffee is, so please caffeinate accordingly. I think we’ll just get started.>>Okay. Great. Thank
you very much, Ed. Okay. So I just want to give everybody an overview
of what the agenda is. So we’re going to start
off in the morning with just welcomes and introduction. So we’ll do all introductions
about everybody in just a second. We’re going to have
an overview of four topics, basically, our ongoing projects. This is going to include
Shamsi and Eric, Andrew, Orit, and Raffaella who is on
the phone, as well as Satish. Then what we'd ask you to
do while they’re talking, so the talk is only going to be
10 minutes and I’m going to be a major downer by,
basically, clocking people. But what we really want
to do is make sure we open it up for discussions
for everybody. So we’re going to give you
10 minutes for discussions. What we’d like you to do is as you’re asking questions and
thinking about ideas, you have post-its and
notepads on the table, as well as pens. What we’d like to do is
just write out your ideas. Then as you write up your ideas, you’ll see that there’s actually
white papers along the board. Go ahead and pick it up
and stick it on there. Then John Lee is going to be
our runner for the morning, and he's going to go around and take a look at what those ideas are and then try to consolidate them. Then we have a 20-minute break. During that 20-minute break, we're going to actually go
through and try to get you guys into groups right
after the break. So we’re going to put you
into four groups and, basically, discuss some of
the topics that we went over. Then right after that, we will
have a follow-up and then lunch. Then the afternoon, we’re
going to do it again. But instead of talking
about ongoing projects, we're going to talk more theoretically about where the future of work in cars should be and where we should be going. So we'll have four additional
talks in the afternoon. Then again, we’re going to ask
you to do the same exercise, then we’ll have a break. Then we’re going to, again, have the discussions and then wrap up and tell you
what our next steps are. So hopefully, that would be
something that you’ll enjoy doing. So what we’d like to do is just
get to know who everybody is. So we wanted to be able
to do introductions, because many of us are academics
and we tend to talk really long. I’m trying to keep you to
something really short, which is I’m going to ask you this, number one, say what your name
is, where you’re from. Then in the future, what you hope to do while you’re
traveling in the cars. So for example, I’ll get started. Hi. My name is Linda Boyle. I’m from the University
of Washington. In the future, I hope
to be able to not get motion sick while I work
on a lot of stuff in my car. Eric, would you like to go next
and then we'll go around the room?>>Yes. No need for a microphone,
we shouldn’t use a microphone.>>We won’t get the recording
without the mic.>>Oh, pass the mic around me? Okay, got it. Okay.
Then I’ll use a mic. I want to just be very fair. My name is Eric Horvitz. I am from Kirkland, Washington. In the future, I hope
to be able to minimize distractions and drive
more safely on all fronts.>>Then we’ll start passing
around the mic and be expedient.>>Good morning.
My name is Ed Durham. I’m from Microsoft Research. In the future, I hope to talk to my digital assistant
while I’m in the car.>>Hi. I’m Andrew Kun. I’m from the University
of New Hampshire. In the future, I’d like
to be able to listen to audio books and annotate
them while I’m in the car.>>Hi. I’m [inaudible]
from Wellesley College. In the future, I would
like to effectively rehearse my PowerPoint presentation
as I’m going to this workshop.>>Hi. I’m Duncan Brumby. I’m from University College London. In the future, I’m
just going to say it, go for a nap while driving.>>Hi. Shamsi Iqbal. I'm from Microsoft Research. In the future, I hope to be able to capture my thoughts while
traveling in my car. I also suffer from motion sickness.>>Hi. I’m Wendy Cafer. I’m with MSR. In the future, I’d like to have
an intelligent conversation with my car, in my car.>>I'm Hanan Samet with the
University of Maryland. In the future, I’d like to
safely doze off in my car, just as I do on the train
and on the plane.>>Hi. I’m Louis Lamb from Brazil, Federal University of
Rio Grande do Sul, which is in the south of the country. In the future, I hope to feel safer when I sleep while
traveling in my car.>>Hello, everyone. My name
is Jacob Tchaikovsky. I’m with the University of Alabama. Actually, I don’t like driving. I bike almost everywhere. So I hope to be able to continue
doing that in the future.>>Hi. My name is [inaudible]
from Ann Arbor, Michigan. In the future, I hope
to spend more time with my family while traveling in
the car which we do a lot.>>Please.>>Hi. My name is Jesse Yang. I’m from the University
of Michigan, Ann Arbor as well. In the future, I hope
to sing karaoke in my car without being
asked to take over.>>My name is Jemba Mai. I’m with Microsoft’s
Business Development team. In the future, I hope not to
drive while traveling in my car.>>Hi. My name is Maher. I’m with the University of Washington and
I’m [inaudible] as an intern. In the future, I want to finish
writing my CHI papers in the car.>>My name is Gang Lo. I'm
from Harvard Medical School. In the future, I hope
that if I fall asleep, I'd be safe when I'm
traveling in my car.>>Hi. Good morning. I’m Satish. I’m from MSR Bangalore in India. If any of you have been in India, you know how driving there is. So in the future
[inaudible] biker too and I hope people driving in cars
pay more respect to bikers.>>Hi. I’m Alice Fern. I’m originally from the University of
Arizona College of Medicine, but now I'm in Microsoft Office. In the future, I hope
to capture my thoughts while minimally working
while traveling in the car.>>Good morning. My name is Mickey
Vurano from Microsoft Research. In the future, I hope to be less startled by drivers who
push me off the road, or cut me off, or scare me in many other ways while
traveling in my car.>>Yeah. My name is Jim Pinkelman. I’m from Microsoft Research Outreach. In the future, I’d like to
do anything but worry about traffic or driving, much like on a train or a plane: sleep, learn, do anything but worry
about the stress of traffic.>>Hi. I’m Yuan Tian from
the University of Virginia. So in the future, I also hope
to do everything in the car, but without worrying about my
privacy and safety in the future.>>I’m John Lee from the
University of Wisconsin. I hope to practice mindfulness
as I drive or not drive.>>I see it’s the recurring theme.
I’m Travis Wilson. I’m from Microsoft’s Speech
and Language team. I’m going to actually
second the very boring, the very appealing
thing, I would love to sleep while traveling in my car.>>My name is Ivan Tafisha. I’m from Microsoft Research
and I hope to be willing [inaudible]
entertainment while I’m driving.>>Don Norman from University
of California, San Diego. There are two futures. In the near future, I hope to stop the car companies and some of the people
in this room who have made very dangerous suggestions about what drivers should be
doing while also driving. Because I want to be able to feel safe and not just from my own car, but from other cars
that might hit me. In the very far future, I hope for driverless cars, where there are
no controls, whatsoever, and so we can do whatever
we feel like doing. Unfortunately, I’ll be dead
by the time that happens.>>I'm Marc Sebrechts. I'm from the National
Science Foundation and the Catholic University of
America in Washington DC. In the future, I hope to repetitively do a whole variety of these tasks that
people are mentioning, because I won’t have to attend to
the key aspects of the driving.>>My name is [inaudible]. I’m from the University
of Munich in Germany. I hope, in the future, I drive less and I travel
less in cars. While I'm in the car,
I want to be safe.>>Excellent.>>My name is Jill. I’m from the Chicago branch of
Urban Wild Studio. I’d like to be able to look around at the scenery while
I’m driving in my car.>>Hi. My name is Leah. I am from the Portland branch
of Urban Wild Studio. I would like to feel more refreshed rather than drained
after a long car ride.>>Very good.>>Okay. Last but not least, we have Raffaella online. Raffaella, can you hear us?>>Yeah, sure. My name
is Raffaella Sadun from Harvard Business School.
I’m also Italian. So I hope that in the future, I will be happy, relaxed, and productive, and not be
taken by road rage in the car.>>Yeah, very good. Excellent. Okay, great. Thank you very much, everybody. So this is what
our workshop goals are. We are hoping that we’re
going to be able to share some really amazing ideas, and look for challenges and
opportunities where we can actually then come together to try to find some solutions so that
we can work together. So our goal is to, basically, identify what these potential
collaborations are, and maybe look for opportunities
where we can work on proposals, projects, research grants, maybe
even co-authorships of papers. So what we’re hoping to do
later on is we’re going to have a matchmaking exercise
so that we can match you to the right kind
of research topics, as well as the people
that can move forward with some of the topics or areas that you’d be interested
in working on. With that, I'm going to move the mic over to Eric so that he can discuss some of the future of work topics.>>Just a little bit
of background. Maybe three years ago, we hosted a National Academy of Engineering regional meeting at Microsoft, and that's where I met Linda, who was speaking about what happens to people and their ability to drive after they have experienced long periods of automated cruise control, dynamic cruise control. My talk was on human-AI collaboration and mixed-initiative designs, and we just kick-started some discussions given our shared interests. In particular, around my early
experiences with Tesla Autopilot. I'm an avid user of it, and I find that I almost experiment too much. If you want to read about some of my experiences, search on Google or Bing for my last name and Tesla, and read about my accident in Wired magazine and some comments about that, which was very unfortunate. But for today, we just
put up six bullets here of some topics, but these shouldn't be constraining. They should be more like framing, a starting point from the organizers. We want to think about what it means to support work in mobile settings, in advance of, I'd say, if it comes at all, this predominance of full automation. What does it mean to
do work or to think or to even listen to e-mail, for example, in a car or
have meetings, for example? This brings up the topic
very quickly of human cognition and the availability of cognition to do different things. If you believe in
Multiple Resource Theory of Mind, what happens when you attend
to something while trying to drive and surprising situations
come up, for example. It might seem quite safe. But when you look at tens of
millions of miles or driving, even slight changes to
the probabilities of rolling the dice leads to large-scale
life-changing events and crashes and disabling accidents that affect
people for life and changing statistics of that
based on designs for how Audio Feeds work or
people checking their device, or you’re looking at how to control the Instrument Dashboard in the car are very interesting
and this one too, we’ll hear later from Shamsi into understanding psychology and cognitive psychology and
new studies that are adapted to these situations
that give us insights and designs and the interaction of design with what we understand
about human cognition. On safety risks here, I want to break this out
and just to highlight this because all of us
in this room have been horrified looking to
our right and seeing somebody on the highway or on
surface streets looking at a device, and then we realized that’s really us because
we do the same thing. So the situation is unbearable, yet we're all on the slippery slope, it seems, and that slope has been found, per several studies, to be one of the factors in the bump in highway fatalities in the US, which dipped for a number of years and is now swinging up. What's been called out is that yes, safety is getting better with the cars, the machines themselves, and the survivability of cars, but we're seeing an increased number of accidents, non-fatal and fatal, because of the obvious distribution of people who are looking at their devices for some segment of their driving time. So the question is: is there something we can do about
designs, short of legislating (in most places it is legislated), but short of enforcing laws about not looking at devices while traveling? What can we do, if we understand the desires and goals and what people can do on the roads today, to make things better and reduce that upswing in safety challenges? What are the directions, whether that is in design or in limiting what can be done and where, for example? One other comment here about this: John Krumm at MSR and I, two years
ago, probably three years ago, at AAAI, published a paper where we took a large-scale data set from Minneapolis, Minnesota of accidents by time of day and day of week, and properties like where the sun was in the sky, its zenith and azimuth, and we basically showed how you can run a risk-sensitive planner to provide the safest routes to travel on, versus the shortest or the fastest. The idea of understanding cognition and work possibilities and designs well enough to fold that together with certain roads, like taking the highway with fewer turns and no pedestrians for certain work, and folding in these new techniques, for example safety maximization on roads, with cognitive needs: this is one example of that.
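As a rough illustration of the kind of risk-sensitive routing being described (a minimal sketch, not the actual planner or data from the Krumm and Horvitz work), the idea is to score each candidate route by combining expected travel time with the crash risk accumulated over its road segments, then pick the route with the lowest combined cost. The segment attributes and the risk weight below are hypothetical.

```python
# Minimal sketch of risk-sensitive route selection (illustrative only).
# Each route is a list of road segments with hypothetical attributes:
# expected travel time in minutes and estimated crash risk per traversal.

from dataclasses import dataclass

@dataclass
class Segment:
    name: str
    minutes: float      # expected travel time on this segment
    crash_risk: float   # estimated probability of an incident on this segment

def route_cost(route, risk_weight=5000.0):
    """Combine time and risk into a single cost; risk_weight (minutes charged
    per unit of incident probability) is a free parameter to tune."""
    total_minutes = sum(s.minutes for s in route)
    total_risk = sum(s.crash_risk for s in route)
    return total_minutes + risk_weight * total_risk

def safest_reasonable_route(routes, risk_weight=5000.0):
    """Pick the candidate route with the lowest combined time-plus-risk cost."""
    return min(routes, key=lambda r: route_cost(r, risk_weight))

# Hypothetical example: a faster arterial route versus a slightly slower highway route.
arterial = [Segment("arterial", 18.0, 4e-4), Segment("downtown", 7.0, 6e-4)]
highway = [Segment("highway", 22.0, 1e-4), Segment("off-ramp", 5.0, 1e-4)]

best = safest_reasonable_route([arterial, highway])
print([s.name for s in best], round(route_cost(best), 2))
```

The same per-segment scoring hook is where contextual factors mentioned in the talk, such as time of day, sun position, or even the cognitive demands of the work a rider plans to do, could be folded into the risk estimates.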
At Microsoft, we're actually looking at the idea of car rides as productive
times for meetings, and it’s not just us. Large car companies
are thinking about designing cars, even if driven by somebody else, as meeting spaces. We've had some interesting reflections about the prospect of someday, when you go to Outlook or Exchange and you're going somewhere, looking at pending meeting requirements, and even jumping into Satya Nadella's car ride to the airport because there's a seat there available for a meeting, and you can take your own car back or be jetted back. But how can we take all that time and use smart scheduling to get people together for conversations? I think that's one of many directions there. Then, thinking about the design
side and what cars can do, how do we look at the
road map for automation? As Don Norman said, there could be
a breakthrough in biology. So we might not be dead when
we have full automation. But on the other side, it is tough, and I can tell you right now, even when I trust my Tesla the most, after some thousands of miles there's a major surprise, even in what I thought I knew to be relatively deterministic behavior given road properties.>>[inaudible]>>Exactly. Again, back to that Wired Magazine article,
think about that. So the question is, if we have a road map on automation, could this community even give guidance to people developing increasingly powerful versions of semi-automated driving, to understand, given these kinds of needs we have, where we're going to work? How can we do that more safely? First, as a simple example, I'd like to know, whether I'm doing anything in the car on Autopilot or not, that I'm coming up to a high-risk safety point like a pedestrian crosswalk. I've had some near misses even there on Autopilot, when I just happened to be looking the wrong way and trusting my car, and the car doesn't know about people or crosswalks. But that's known by maps, and there's the idea of using that to understand why people have trouble at hot spots. Finally, what does all
this mean for policies and legislation to come up
with the right set of methods? For example, it might turn out that, back to what someone said about being horrified seeing some of these people in cars on automation, maybe my car lights up and says, “This car is being driven by an attention-constrained driver, stay away.” Pedestrians can see this car going down the road near a crosswalk, a big pink light flashing: “Get out of the way, because I'm not sure if this person is attending to the road.” You can imagine all the policies that this would drive. So I'll stop there. Just a few ideas, and we'll be hearing
more about this later.>>Yes. Thank you very much. I think I’m good. All right. So just really quickly, and then we’re going to
get to our next speaker. I’m hoping that we can create
a matrix such as this where we put together what collaboration
opportunities there are. We just made up some topics for now. But we’re hoping that
as the day unfolds, that we’ll cement
what certain topics might actually be to basically look at
ways that we can say like, we want to do some collaborations. We might have some data collections. We may also even have
resources that are available. There might be students that
may want to work at MSR. There may be some additional data or data collection opportunities. We’re going to hear later
about a survey that we’ve been collecting information on that
we would love to get feedback on. There may be some
part-time appointments. With that, I’m going to, great.>>What do you want us
to do with the notes?>>Turn them round.
That’s a great question. So Don Norman just asked what
we’re going to do with the notes. So what we’re going to
do is we’re going to have each individual person
put it up on the white.>>[inaudible].>>So what’s going to
happen is we’re going to put it up there just like, what was your name, I’m
sorry, I know you but?>>Mark.>>Mark? What Mark is going
to do is, see what he did? Gold star, Mark, and Don, gold star. So you’re just going put it up there, and I’m glad that we have people
that are initiating already, and then John’s going to
walk around and look at them so that he can go ahead
and start moving them around. So with that, I’m going to go have Shamsi come up to give her talk. Eric, did you want to just
do a quick introduction on Shamsi’s talk? You’re
going to pop this up.>>Originally, we were going
to do this together, because Shamsi and I worked so closely on projects that booted up our deep dive into a driving simulator that led to a set of studies. Then later on, with interns, Shamsi continued for a number of years and has been working on it to this day. The basic idea is what
do we understand about results from cognitive
psychology when it comes to memory,
attention, judgments, and the ability to do mechanical tasks with divided attention, in dual-task or triple-task scenarios. There's been a rich literature in this space. So the starting point is going to the literature, but then you actually want to put people in simulators and design studies that specifically address various kinds of what you might think of as aspirational work tasks in cars, to understand limitations and where there might be availability that wouldn't be so threatening. I had some experiences again,
where I thought, “Is it possible that
we know enough about the mind that when you visualize and think visually, that sucks away attention from your visual attention on the actual live scene?” It turns out that those questions framed some deep dives, and I thought the results were surprising. We could maybe get into them later
in some of our early studies, but it really is
shocking and surprising, and interesting to see
what really happens in the safe rooms of simulators when people are
trying to do two things, whether it's a conversation or a recall task, or even more intensive tasks. Shamsi, I'll turn it over to you.>>Thank you, awesome. Okay.
So Shamsi has 10 minutes, and I would like to see some of you start picking up some
of those Post-its, and start working on them. If you need a seat, we have plenty
of seats on this table too. So go ahead, Shamsi.>>Okay. Can people hear me?>>Yes.>>Awesome, because I wasn’t
sure that this was working. I am Shamsi Iqbal. I am a researcher in Microsoft
Research AI and as Eric mentioned, I have been working with Eric
in this field for a while. I started as a researcher back in
2008 and Eric said that, “Well, we have this driving simulator, and would you like to work on
some attention stuff in the car?” I said, “Well, yeah. That’s amazing.” Because I worked on
attention management on the desktop, and so it was just a cool
way of moving domains. So yesterday when I was
putting these slides together, I went back to some of
the old slides that we had, and this was because I felt that we needed to
start this discussion, this workshop, around
the topic of safety. When I was listening to people give their one-sentence desire from what they wanted to do in
their car, safety was predominant. There were six people who said
that they wanted to safely nap, but I think that everyone
else said that I want to be safe in the car
and do X, Y, and Z. The fact of the matter is that driving is no longer
a single attention task, and hasn’t been a single attention
task for quite a while. Even if we're not doing something in the car, there are other people in the car. We're talking to them. Many of those people in the car are actually not really attention-aware people. If you have encountered any of them, my kids are a good example: they have no sense of when is a good moment to interrupt when a person is driving. But there are also these devices, and these pictures are
dated because I pulled this slide from 2010. But what I did update
this morning, or well, yesterday, is that there haven't been a lot of changes in terms of the legislation, and these are the June 2019 distracted driving laws. There is no cell phone use at all allowed for novice drivers in 39 states, and this number has been the same since 2014, because I changed the slide, I know that. No cellphone use at all for bus drivers in 20 states; we have 50. So I mean, there are 30 other states that are still falling behind. Handhelds are banned for all drivers in 19 states. I think that changed from 14 to 19 in the past five years. Note that it's handhelds that are banned. You can still talk, you can still use Bluetooth. Text messaging is banned for all drivers, and that is in 48 states. That was 44; four more states joined in. No state bans all use of cell phones for adult drivers while driving. Now, keep that in mind. That means that
legislation is still allowing us to do things in the car via cell phone, via Bluetooth, and car companies are picking up on this. If you think about the in-vehicle assistants, they're now coming into the cars, and the goal is not to make us more distracted, I hope, but rather the desire is that these vehicle assistants will help you do things in the car. Now, what I haven't heard
a lot is about the safety. How are they going to make
you also be safe in the car? So there’s some unique things
about the car environment, and I’m going to just go through
like manual, semi-autonomous, and then autonomous vehicles, because I think a lot of the things that we're going to discuss today are eventually going to apply the most to autonomous vehicles. But what we have today as the majority is the manual car. In a manual car, which is basically the car that regular people like me drive, driving is your primary task. It's a continuous attention task, and sometimes we interleave these other non-driving tasks. It could be fiddling with the in-vehicle controls, it could be talking to someone
over the phone via Bluetooth, it could be talking to
other passengers in the car. Now, when you have
semi-autonomous vehicles, that is when the driving sometimes becomes your
secondary task because now, it allows you some cycles to
get involved in other things. I’m thinking mostly about
the level three and level four cars which are mostly
driving themselves. But they do require you to
take over control at times, and that means that you have to
be aware of the road situation. You can’t be totally zoned out. Then there are autonomous
vehicles which is where the car will basically
become your chauffeur driven car, and you’ll be going from
point A to point B. So it's still not your office. It's still a limited-attention environment. It's a moving vehicle. So even if we think about doing tasks in that environment, things like motion sickness, things like being in a cluttered environment, all of that comes into play. Which means that we have to think about designing tasks in a different way. So I want to quickly go over this, because I have dwelled on this a bit, but the reason why a non-driving task in a car can be unsafe is improper
attentional distribution. People often underestimate
the cognitive resources that are required
when you’re driving, and you’re adding
an additional task on top of this. Driving is a continuous attention task, and you typically don't want your non-driving task to also be continuous attention, because then you're competing, and at some point you might just deplete your cognitive resources. We also choose improper moments to engage in secondary tasks. So if you're on a busy road with lots of traffic around you, it's not a good idea to start something new which is not driving. We also don't understand when to shift focus back. So all of these we saw as opportunities to think about how you can let drivers be safer in the car. In particular, keeping in mind that non-driving tasks
are going to happen. A conversation even via Bluetooth, that is still going to continue. So how can we make sure that we
maintain the driving safety, but also allow people to continue or complete some
of these non-driving tasks? Remember, safety is our key point here. We are just looking for opportunities where we can allow people to do these tasks without compromising safety. So we looked at questions around when, what, and how. I'll come back to the assistants later as we go through, but as Eric mentioned, our premise here is that the studies I'm going to touch upon are all simulator studies. We're not doing anything on the road. In most cases, a hands-free phone conversation is the secondary task that we were looking at in these studies, and there was only one study where it was a semi-autonomous scenario, and we looked at typing and video captioning there. The goal was to see how easily people were able to
switch back to driving. Okay. I probably wouldn’t
have enough time, but I will be happy to bring this up during
the discussion section. But there were four studies
and the first one, which Eric had hinted at, we wanted to know when
is a good time for someone to initiate or engage in
a phone conversation and also, what does that
conversation look like? The second one, we
wanted to see that okay, so once we have an idea about when should we even think about
engaging in a secondary task, how can the car help the person who is engaged in one more task on top of driving to be alert
and vigilant on the road? We also wanted to see that okay, you’re having a phone conversation, there’s someone on the other end, can we do something or share some information with the other
person on the other end, so that they can also
help the driver be safe? The final one is that we
wanted to look at the role of audio alerts in
a semi-autonomous scenario where the audio alert can help people get out of their secondary task when the car is still
in autonomous mode, and then start gaining awareness
about the driving scenario. So very quickly. Let me go back, because I do know that
we don’t have time. So how is driving performance affected by
a simultaneous phone conversation? We were looking at three types of conversations or
three types of content. One was when a person is asked to give step-by-step road direction to someone else when they’re driving. The other one was they were asked
to recall some information, and the third one is that they
just listened to a news headline. We also want to see when or what types of roads was
this being impacted by? So people who are driving
on super simple roads, there’s no other traffic there, then they were driving on roads
where there were a lot of traffic, and third is that there is something unexpected that
happened on the road, so people have to be able
to correct for that. So again, I’m going to
quickly drive through this, but what we found was interesting. So Eric and I had this hypothesis that clearly, the direction-giving task is going to be the most difficult one, because that's the most involved one: you are asking someone, give me directions from your house to the grocery, and I'm driving. So there should be conflict between what I'm doing in terms of driving and the task I'm asked to do. It turned out that people were actually breaking the task into smaller chunks. So they would say one sentence. They would pause, they would make
sure they’re driving properly, and they would say
the second sentence, and they would just basically
micro-task this whole conversation. In contrast with the other task
which was recalling information. These were information
like when did you last put gas in your car or when did
you go to the grocery last? People had much more difficulty with those tasks. What we realized was that whenever you needed sustained attention for a task, that was much more difficult than a task where you can actually chunk it and break it down. So the next direction was mediation, and we wanted to see, if a system knows about the
driving complexity during driving, can it effectively intervene? It’s basically, can the system alert the user at a point so that
they can focus back on driving, if they’re having a phone
conversation during driving? Okay. So there were three things
that we had looked at. So we had looked at providing a very general message
about okay, pay attention, and then the other one is that
there is a roadblock coming up, you need to start paying attention. The second one is that we
would put the call on hold or we would let the call continue, and see how people would just
basically taper off by themselves, and the timing of this intervention. So would we interrupt
people right in the middle, or should we wait until they
reach some break point, let them finish that sentence? What we found is that
interventions that had more explicit messages that told
people exactly what to expect were more effective,
which makes a lot of sense, but we were worried about the load that it might put on people when
they’re trying to process that. This was traded off by a slowing
down in the conversation. So people would actually slow down in their conversation when they
would get these alerts. Not so surprisingly, drivers
preferred the intervention, because it directly impacted them. The callers on the other end really didn't have any awareness, or they didn't really care all that much, even though they were drivers themselves. It basically means that
we need to educate people on the danger of
driving and conversing. So I know that I don’t have time, but I’m going to quickly
talk about this one, because of the last result, where the callers really didn't seem to think that the intervention was useful because the call was put on hold. This is where we thought, okay, let's share some information with the callers, and see how they regulate the conversation. We found that if you shared road sounds with the callers during the conversation, they would actually
regulate the conversation, they would speak in
shorter sentences, and they would take on the load of conversation on themselves more. I am going to skip this part
because I know that I’m over time. I’m going to just quickly leave
you with this rather dense slide.>>Then we can leave it up>>Yeah, let’s leave it up. I kept it there because
I thought that it would be good for the discussion if
people are able to read it. But I think that
we’ll just stop here.>>I hope you don’t mind. We’re going to stop here
just because I would like to get some feedback from the audience.>>Yeah. So that is the goal.>>Yeah. So please, go ahead and start writing
and putting up those, and then John will walk around and try to [inaudible] I see that Don has his hand up.>>Are we now in the discussion?>>Yes, we're now in
the discussion phase. So I’m going to go ahead,
and do discussions, and then about a minute or two before I may have you
go, put up your slide. But for now, yeah. Go, Don. Microphone, we need
the microphone. Yeah.>>Don Norman, what I’m
curious about is the timing. So in the first study, you talked about it
on the cell phone, and there's a warning. As you know, at 60 miles an hour, in one second we go about 90 feet. So what I'd like to see is a simulator study where
some unexpected event happens. How much time does it take people to be able to start their response? We could do this even where you have an expected response, so you would alert them. Then I still want to know how many seconds it takes them before they could actually respond. Have you done that study?
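As a quick editorial aside on the arithmetic behind Don's figure (added for context, not part of the discussion itself): 60 mph works out to 88 feet per second, so the quoted 90 feet is a fair round number, and every additional second of reaction time adds roughly that much travel before braking even begins.

```python
# Distance covered during the reaction delay at 60 mph (illustrative check).
MPH_TO_FPS = 5280 / 3600           # feet per second per one mile per hour

speed_fps = 60 * MPH_TO_FPS        # 88 ft/s, close to the quoted 90 feet
for reaction_s in (0.5, 1.0, 1.5, 2.0):
    print(f"{reaction_s:.1f} s reaction -> {speed_fps * reaction_s:.0f} ft traveled")
```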
>>So we haven't done that study per se, but as for the way that we set up the alerts: we had
told people that you are going to drive
at 50 miles an hour. So that's where we had constrained it, that you can't drive 100 miles an hour. The other one is that
we provided the alerts with enough time so that
if they listened to it, and they hit the brake pedal, they should be able to stop in time. What we saw was a lot of
slamming down on the brakes because one is that
it’s a simulator study. So they are really not
worried about killing people, but no one also wants to
kill someone in a simulator. But there was a lot of slamming down. We didn’t see a lot of
gradual slowing down, but one thing we did see is that people, in general, would drop below the speed limit because they knew that they were engaging in the secondary task. So they would almost correct themselves ahead of time, and they would drive slower so that they would be able to react. But I think that's
a great point about figuring out what the safe stopping
distance and all of that. But I believe there had been other studies that have
looked at that. Yeah.>>So I’m a little curious about the intervention that you described, and that was you would put
the call on hold, right? Have you noticed that
people would perhaps react differently to being placed on hold where all of their
cognitive attention would go towards
“what just happened?” and completely leave the road. Was there any adverse effect to doing this kind of intervention?>>So it was interesting, because we thought that
the call on hold would be, I mean, one is that it is jarring. It did give you a message at the beginning that the call has been put on hold, and so this is something that both the caller and
the driver could hear. So for the driver, it made a bit more sense because
they were in that environment. They could see what was coming up, and so they could easily
understand what the reasoning was. For the caller on the other end, they had no clue until they would
suddenly abruptly be cut off, and then the driver would start
speaking again and so for them, it was actually not
a super comfortable experience. But yes, when they didn’t
put the call on hold, we did find that halfway
through the conversation, the driver would stop
and say that wait, I need to take care of this. So I mean, maybe there
were better ways of smoothing people into that whole
situation but at that point, we just wanted to see that,
okay, let's see what happens. We just turned off the call entirely.>>One thing before I go to the next person is, as you're
thinking of the ideas, I'm hoping that Shamsi's talk generated some ideas for maybe some other projects, or other things that you can do in the driving simulator. Please write them down and post them up. Okay. There was another
question on this- yeah.>>So this is related
to Don's comment; in a way, it's switch cost, and in part, I mean, there's always the question about the environment in which they're set, so they actually are set to be preparing themselves
to switch between tasks, and part of the challenge for actual drivers is that
they don’t do that. So they’re not engaged
in a phone conversation with a question in mind, okay, I’m monitoring two tasks
on an ongoing basis. So they may be in a different cognitive state with a task that is
characterized that way.>>I think the most relevant, and this is the study
that I did skip over. So this was a study where we were
in a semi-autonomous scenario, and so we were looking
at what happens when we provide alerts to people
way ahead of time. Right now, I believe
Tesla gives you an alert. Eric, I’m not entirely sure.>>It’s not airtight.>>Okay. So what we
wanted to see is, let's add another alert, like 30 seconds before. There was literature that shows that it was about 22 seconds, and just to be on the safe side, at 30 seconds before we will start alerting you that there's something coming up on the road. These were all for predicted things that are going to happen. This is not for situations where someone suddenly runs into the road. So we constrained it that way. But we wanted to see that
we would start from 30 seconds, and we tested with two different types of alerts. One was a bursty alert, where at 30 seconds before, then at 15 seconds, and then at 10 seconds, and then you get your Tesla-style alert. The other one was that you get this increasing pulse. So it starts pinging you, or starts this pulse, at 30 seconds, and it gradually increases in its pulse rate and in its intensity.
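To make the two alert styles concrete, here is a small sketch of how such takeover-alert schedules might be generated; the timings, pulse rates, and the linear ramp are illustrative assumptions, not the actual parameters used in the study.

```python
# Illustrative sketch of the two takeover-alert schedules described above.
# Times are seconds *before* the anticipated takeover point; all values are
# hypothetical rather than the study's actual parameters.

def bursty_schedule(lead_times=(30, 15, 10)):
    """Discrete alert bursts at fixed lead times, then the final takeover alert."""
    return [(t, "burst") for t in lead_times] + [(0, "takeover_alert")]

def increasing_pulse_schedule(start=30.0, start_rate_hz=0.5, end_rate_hz=4.0):
    """A pulse whose rate (and implied urgency) ramps up as the takeover
    point approaches; returns (seconds_before, pulse_rate_hz) pairs."""
    events, t = [], start
    while t > 0:
        frac = (start - t) / start                      # 0 at start, 1 at takeover
        rate = start_rate_hz + frac * (end_rate_hz - start_rate_hz)
        events.append((round(t, 1), round(rate, 2)))
        t -= 1.0 / rate                                 # pulses arrive faster over time
    return events + [(0.0, "takeover_alert")]

print(bursty_schedule())
print(increasing_pulse_schedule()[:5], "...")
```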
We found that people were reacting better to the increasing frequency, and they were able to actually leave the task that they were doing in a reasonable state, because otherwise they're just dropping
their phone, and they’re immediately switching over. Here they were able to
smooth out of their task, and at the same time, get themselves
prepared for taking over. So maybe in a semi-autonomous vehicle, it makes sense to not just have one alert, but multiple. Did that answer your question?
practically whether or not you can get to the relevant information, but the other thing is
just the cognitive state. One of the challenges for
the drivers is that they’re not in a state where they
are saying, “Oh, okay. Let me attend, see
when the alert comes, then see when I got to respond.” So it’s a complicated thing
to do in a simulator, but having them in a state which is typical of what the driver’s state is rather than one in which
they are experimentally set to respond differently to those. That’s a hard task,
but it’s [inaudible].>>It’s a hard task and so
that's one of the reasons why generalizing
from simulator studies to the real world is always
difficult and of course, we have constraints around how
much we can actually test out, and especially at Microsoft. I mean, we haven’t been able
to move beyond the simulators.>>So I think for me, one
of the questions is I think we have very different
driving scenarios. People going for hours on
the motorway versus the city, and I think what we have done
in research over the last years is very often not making
very clear distinctions. I think some of
those automation tasks are really easy on the motorway, and I think also
looking at the timings, we very often have these runs of 10 minutes, runs of 15 minutes, and I think this is typically not giving you the problematic stages. So usually, if you
look at statistics, it’s after hours of
driving. So my question is, if we do set up these academic exercises and we publish, how much do they really translate into a transformation of how we really can use the car? So my question is really, should we not rather look at what these things are which are difficult? Basically, there are a lot of handover things which are tricky, and my question is whether it's maybe more useful to think, what are the things we can automate, be it the motorway with additional infrastructure, and the things we cannot automate, and not try to mix them, at least for the near future.>>Okay, great. I'm going
to ask Andrew to go ahead and get set up for the next one. Are there any other comments?>>Just to Albert's question, and as Andrew is setting up.>>Oh, yeah. I'm sorry. You go ahead.>>So I think that's really
a very important point. I mean, I as an interruption
or an attention researcher, I always worry about trying to generalize too much
from simulator studies, but I think that this is
a great discussion point for this group here is thinking about
how can we design these studies, or how can we move
the knowledge forward? Also, it's not viable to think that many of these things are things we can test out on the road. So can we determine ahead of time, these are the things that we're
not going to be able to automate, and these are the things where
we see some opportunities? In my later talk today, we’ll talk a little bit
about that. But, yes. I think that one of the things
that I do want to make very clear is that our goal first and
foremost is the driving safety.>>[inaudible] say something
about the duration just sort of four hours [inaudible]
because I haven’t seen much for [inaudible] on
[inaudible] as sort of, you have people hour in the lab as you can [inaudible]
because it’s easy that people [inaudible] you got people getting four hour in the lab
[inaudible] three hours.>>I think it’s
a fair point to discuss how we can design
these kinds of studies. I know for sure that there are some studies that look
at mind-wandering that happens when people start to get bored when they're driving. So in those cases, you could actually strategically place some tasks that make people more alert, and that's the reason why
we listen to music or listen to news when we’re driving for a long time and no one else is in the car who can keep me alert. But I think that that’s also
a fair discussion point, because I really don’t have
an answer for that either.>>Well, thank you Shamsi. Thank you very much. So
please put up your ideas. Just a couple of things: we've talked about switch costs, the costs associated with switching from attending to the road to attending to other things. So this idea of task switching
might actually generate some ideas. Also, capturing the appropriate cognitive state, as well as the value of capturing information in the simulator versus on the road, and what the difficulty is in trying to get some information on safety in that respect. Then also, looking at what
we can and cannot automate. So maybe that might help
generate some ideas, and with that I bring you to our next speaker, Andrew Kun. So you have 10 minutes, go ahead.>>Am I on?>>Yes, yes, you're on. Go ahead.>>[inaudible]>>Thank you, Linda. Okay. So what I wanted to talk about today is
an overview of where we stand. I was really curious if
I could figure out what the state of the art is, and I know, we know from Eric what the state
of the art is for him, and then just what exactly do
we have on the roads today, and what is the outlook. So I wanted to start with
this really nice figure from a paper by Stevens and his
colleagues from this year’s Chi, where they compare what today’s travel to work
and back looks like, to perhaps what it might look like, say 25 years from today, if we have decent automation
in place for vehicles. So what you see at
the top of the figure is the distribution of
time for today, right? So you have breakfast at home,
then you travel to work, then you do your work nine to five, travel home, relax,
dinner, and so forth. What we might be able
to do if we have better vehicles that allow us some automations and freedom in the vehicles, this is
the vision, right? So perhaps, I don’t know, are we going to have
breakfast in the car? I’m not convinced, but
you might be able to eat, you might be able to do
some work in the vehicle, and then both going to work
and perhaps traveling back. You might be able to relax. This is going to come up
later in our work as well, and I wanted to point
out a couple things. Look at the travel time,
for example, right? The travel time is
extended because you feel so comfortable in
your new vehicle that’s automated, that you’re willing to
travel further from work. So you’re willing to
move further away. Notice the fact that work
has extended, right? So we have more time for work. This is by the way, a tricky point. This came up in various conversations
throughout this Faculty Summit. Is that a good thing or
is that a bad thing? Is this a way that we’re
going to try to make work creep into people’s lives more, and more, and more, and
take advantage of them? Or, as someone said, I would like to finish my CHI paper. That's our perspective, right? Give me some more time so I can write that CHI paper. So that's a valid concern. However, overall, do
you notice that with the vision, the total time including travel and work is shorter? So this person in
this scenario leaves at 8:30, comes back at 5:30,
but leaves at 8:30, and is back having
relaxed by 5:00 pm. So even though they worked more, the total time spent is less. So we can definitely discuss
how exactly this works out, and what exactly
the distributions are, but there is some interesting hopes
and interesting potentials. This is the state of
the art though, right? At least from my perspective. So this is me sitting on a bus with Orit and writing
a workshop proposal. So really, what I’ve perceived to be
the state of the art is that really I’m going
to take my laptop, and it’s going to be on my lap, and I’m going to work. I did find one interesting, what I think is the state of the art. So this is an Uber
commercial, and maybe I can play it. Yeah, there it goes. So this is a commercial for an Uber higher-end service, sorry, and
this is what they’re saying, “Look, you can sit in
the back of the car, it’s going to be very comfortable.” What exactly are you
going to be doing? Calling, there’s going to be an iPad, what do you call it, a tablet, and a computer, and these are the things that you’re
going to be working on. It’s comfortable and nevertheless, that is sort of the state
of the art of what we can expect to be doing. I want to say that John Lee pointed out to me
that in one of these frames, there were two people
sitting next to each other, and it was supposed to be
showing two different cars. But that’s also an interesting
question of, I think, privacy came up in one of the comments when we
went around the room. If you do travel with other people, even if it’s comfortable
and you’re working, how does privacy play into this? Here’s another promotional
picture for Uber. So Martha Stewart promotes Uber, and this is the vision from Martha Stewart's perspective. I really like this photo for a
couple of different reasons. One is that it shows her doing things that you’re not
doing right now in the car, and actually cannot do in the car. So Martha Stewart is preparing
some sort of a very fancy-looking, some sweets, right? They’re
very fancy-looking. This is what she does anyway, she makes us feel kind of bad about how we can’t do this so easily. But the truth is you are not
going to do this in the car. It’s going to be bouncy and bumpy, and it’s not really. But again, let’s think about what
we could be doing in the future. That’s what I think is
interesting about this. There is also a printer and that actually is
starting to be something, you can sort of see
something like that. So what I like about this photo is that it encourages me to think about things that we’re
not currently doing, not just like that commercial before where you’re are on the laptop, but what other
interesting things might there be that could
be possible to do? I think that’s worth thinking about. Then Orit pointed out to me that
Martha is looking out the window. It's probably a feature that was supposed to be artistic, but the point is that a lot of us get motion sick, right? The only way we can handle working in the car is to frequently
look out the window. So again, as we think
about automated vehicles, this is something that
we should consider. I really like this paper too, Bastian Pfleging, who was
Albert’s PhD student, and a couple of colleagues wanted to see what people are interested in
doing in automated vehicles. But the part that I’m showing
here is what people currently say they are doing in automated
or in manually driven vehicles. So this is 300 participants
who completed the survey. When they drive, they call and
text in pretty large numbers. From the paper, I can’t quite tell, I’ll have to ask Bastian what it
means to text in this context. Do they mean, I’m stopped at
the red light and I’m texting, or do they mean something else? Do they mean voice interfaces? But nevertheless,
these are the types of tasks that they’re
undertaking right now. Passengers do all kinds
of other things as well, including e-mail, including whatever
they call office tasks, right? So I think it’s really
interesting because this does capture something about what
people are doing right now, and this is going to be something that Orit will talk more about from our own view of this, through this NSF-funded project, the next mobile office. So the five of us, and you can talk to us about this project
throughout the day, but the vision that we have
is something like this. So I was just talking to Yvonne about our need for
the next version of HoloLens, but what if you were that driver and your car can take over for some amount of time,
maybe it’s five minutes, 10 minutes, whatever it is, how could we support you in doing work beyond having
a laptop on your lap? We think that as far as the human machine
interaction is concerned, Augmented Reality,
Speech Interaction, and tangible Interfaces
might be the way to go. So I’m showing
Augmented Reality here. This is one of Orit's students' really nice work, shown here. Then some sort of
a speech interaction, and some tangible interface, something physical,
but probably small. We don’t necessarily
think that a laptop is something that’s all that great
for this for multiple reasons. One is that you have to look
down, but the other is, and I'm really curious what everybody thinks about this, how long these handovers are going to be. Cars will eventually say, “Look, it's your turn,” and it's actually probably going to be often, right? The handover will have to happen, and that's probably
quite doing this right now, but they said, “Fully take
your hands off the wheel, we’re going to take over driving.” The time they give you the
come back is 10 seconds, and I don’t think you
can stow your laptop in 10 seconds safely, right? So I think this is
interesting to think about. I think it’s really important
we talked about safety. It’s critical,
this handover will happen, and then whatever is there to support your work at the very least
has to get out of the way, and ideally actually can
support you in driving. So if you do have Augmented
Reality that can be something that can
help with navigation, for example, or helping you with finding street signs
and so forth. I wanted to quickly put
up a couple of things that I think would be
worth considering. These transitions, this
is work with Shamsi and actually led by
a Christian Janssen who was a Duncan’s [inaudible] student
and his colleague Stella Donker. The idea here is the following transitions between the non-driving task and the
driving task are not just simply, we’re doing one and then we’re doing the other and we are coming back. But rather there are stages. So what the stages are and
we can argue about that, the details of that, right? But you will be going through some stages in terms of
handing back and forth. Importantly, there’s
going to be interleaving. Meaning that while you’re doing the your non-driving task and you
start to do your driving task, you might come back to
the non-driving task, can go back and forth. This is a safety concern, right? This is something that has
to be designed properly. Sort of, for example, the actual physical transfer
of control doesn’t happen after suspending
the non-driving task, right? So you might not have suspended
the non-driving task, you might still be engaged in that non-driving task even after
you’re driving has started. I have two more slides with that, so it’s really quick now. I knew I was not going to run over in time because she won’t let me. One paper that I thought
was really very interesting is this paper of Magnusson and Pope and it’s a review of biomechanics
and working postures. One really interesting thing
about this to me was that they found drivers with
lower back pain, right? Especially if the drive
is long and there’s lots of vibration as well as
musculoskeletal problems; neck, shoulder, arm, especially if your arms are say not supported. What we’re talking about, think
back to that first slide, extended drives because you’re so happy that your car can drive itself. But there was there is this concern that since you’re driving longer, now you’re introducing issues that could hurt you physically, right? So I think that this is one of
those things that we really need to be concerned about
as far as our design. To wrap up, today we have
simple tasks, right? I think the state of the art we can argue is that for
the vast majority of us, the best we can hope
for is that we'll be on our laptop or some sort of a pad. We have high hopes for tomorrow, but as we look at tomorrow, we should really look
at motion sickness. We should look at privacy and how we become partners in privacy as John
I think really nicely put it. Then safety, I think it’s really interesting to look at
this interleaving between tasks. Then ergonomics, we must worry about that.>>Great. Thank you, Andrew. So we're going to go ahead and
go into the discussion portion. I’m going to ask
our speaker to repeat the question once he hears it. I will now open it up for
comments and discussion.>>Jim Pinkelman from
Microsoft Research. Are you aware of
research that approaches the problem by sense, and the cognitive load that each of our senses requires? So for instance, listening is very easy and very low cognitive load. We've been doing that for decades in the car. We've been multitasking while listening. We've started to talk, and that's a little bit more of a cognitive load, and then I get into visual and touch actions in the car. Are you aware of any research or relevant thinking
along that dimension?>>I think you’re hitting
it right on the head. I think a lot of us have
been thinking about that. I think Shamsi in fact just mentioned that in her presentation as well. So as far as Cognitive Load and Eric actually
mentioned this as well. All right, so in terms of
multiple resource theory, I think a lot of us
do think that this is exactly how you should look at
it and hope to tie that in.>>I have to make a correction. This is not exactly how
you should look at it. There's been a huge amount of research on this. Alan Baddeley published a paper decades ago in which he had people driving while they listened to a radio station playing, say, news broadcasts, or to a radio station broadcasting people playing football. This is in England, so it's soccer. Listening to the soccer game really wrecked the driving performance, because it's a visual task, right? They're listening to a visual thing; they're listening auditorily. So the point is there's a lot of literature, and it's a bit more complex than simply one sensory modality versus another.>>I would like to add something to Don's point, which was
interesting, because when we had designed our initial study about giving road directions, the first one that I had described, people were trying to either recall information or give road directions. Eric and I hypothesized that the road direction task would conflict much more with driving, because that is also visual; when I give directions to someone, I visually think about the roads. It is complex, because I believe just the fact that people were breaking the directions down made it somehow cognitively more manageable, whereas when they were trying to recall information, maybe they were also trying to visualize it. When did I last get gas in my car? Well, I have to visualize when I last went to Costco or wherever. So I think that it is not as simple as just having speech or listening, as soon as you start visualizing it, and as you mentioned, soccer is a visual thing to think about. I mean, Eric has examples about visualizing an airport's, what is that thing called? The display board.>>[inaudible] Sorry. Of
the actual schedule screen, until someone shouted stop. Actually, I was visiting Irvine, after giving a talk down at UC Irvine, and all the grad students shouted, "Stop". I was going to a dinner. It's like this visual image ahead of me evaporated and I could see through the windshield again. To our surprise, all the professors in the car in front of us had stopped quickly; we would have taken them out. So, one more comment. Even beyond football versus news, you have to be cautious, Don, because comparing the football versus the news reports versus the music, I think, unfortunately leaves things at the margins. There's some overlap, but it's too high-level; I mean we need more detail. The fact that we found, and it's just one study we did, is that with memory tasks, recalling when you did an oil change, people would lose their track on the road and get in trouble. With a memory task, we don't know what combination of visual imagery and remembering was getting in the way of their being able to navigate, be aware, and sense the situation. But the theory we came up with, and this is in the paper, gets into the detailed structure of how cognition works when it comes to giving directions versus remembering oil changes: with directions, you could really put stuff on a stack, pushing and popping pieces from memory when you need to attend to the road, and still remain alert. You can't do that with one gigantic call where you're waiting for the answer to come back, like the oil change. I really want to suggest to this community of folks that it'll be great to get into the right level of detail and then abstract from that, versus starting at a high level.>>We are in agreement;
the important point is that the word 'simply' does not apply.>>So I really, okay Don, go ahead.>>I just wanted to pick up. You're talking about basic current-day activities, and you're talking about studies like Bastian's. One thing I can't help but notice: at UCL, the office is in central London, but we live about 50 miles away towards Cambridge. So I commute on a train going to the office, and you can't help but notice, when you get onto the busy commuter train, the patterns of people's natural behavior, right? I think one of these things is just starting to understand the way people work or want to work. So picking up on the work we saw [inaudible] present at the opening on Wednesday, about the types of work that people do and when: it's very clear that in the morning, people are getting their heads down and doing focused work. There's a lot of activity; you can hear it in the fingers tapping. People are preparing slides, writing e-mails. In the evening, coming home, there's no work. There are no laptops out. If there is a laptop out, it's somebody watching a movie, or playing a game on their phone. There's very little activity. It's kind of funny just to observe. I think understanding more about the types of activities people are doing, when, and how that fits into their daily schedule is super important.>>I'm getting kicked
off, but I totally agree. One thing that we're going to hear about soon is what it is that people would like to do, and I think that goes along with this. Then I think what everybody is saying, including Eric, is really important: what is it that tech can provide? I think there's this push and pull of let's not design things that people don't want, but also let's inform people's desires with what is possible, and that includes cognition as well as the tech that provides those human-computer interactions.>>[inaudible].>>I totally agree.>>So maybe one last question
or last comment.>>Okay. So my last question is about privacy. You mentioned the [inaudible] letter about privacy, but I'm also wondering what you think about privacy when self-driving cars are collecting all the different kinds of sensitive data, because there may be privacy violations not just for the drivers and the passengers, but also for the other cars and the pedestrians.>>I think this is a really complex question, and I don't know; perhaps at some point John might be a better person to answer that. But yeah, I'm aware of the fact that it's a complex question, and that's the extent of my answer, but I don't know if you wanted to add anything.>>[inaudible] Just to
remind people who've had these great comments that were just shared: write them down and post them up.>>Yes, thank you. I was just going to remind people to do the same thing. So, just a couple of comments that I've heard that I want to summarize real quickly. Privacy issues were something that we didn't talk about earlier, but it'd be great to maybe capture some of that. The user interface, what type of system it should be and how it should look, is actually really important. Then, what are people actually interested in doing, and then maybe looking at some of the micro levels and trying to understand some of the details of the things that people actually want to do in their cars, and then taking a look at that in the context of the larger picture.>>Sorry about that.>>There's a wire right here.>>So it's my pleasure to introduce our next speakers, who are Orit and Raffaella. I'm going to take just a second. Let's clap for Andrew first. Thank you, Andrew.
Okay. Go ahead, Orit.>>Can you hear me? Is
it on? Can you hear me?>>Yeah.>>Okay. Great. Thank you. It's great to join the discussion now that we're starting to think about attention and productivity, with concrete findings from a preliminary study of what people want to do in the car, and what people hope to do in the car. This is work that I'm conducting as part of the NSF-funded research together with our team. I will present it together with Raffaella Sadun. Raffaella is joining us online. Raffaella?>>Yes. Hi, everybody.>>Great. I'm a Professor of Computer Science at Wellesley, and this is a very interdisciplinary perspective on the work. So I'm going to start. It's not going to work. Okay. So I'm going to start by
thinking about our methodology. One of the goals of the NSF-funded research is to identify the needs of commuting workers and managers. We have three different stages, not necessarily linear, in our methodology. The first one is to study existing data. Specifically, we're looking into the American Time Use Survey. Raffaella and her team, and John and his team, are analyzing these data and looking for trends that relate to commuting for workers and managers. We're also conducting a new large-scale time-usage and well-being study in order to understand what users, what drivers, are currently doing in the car, and what they're hoping to do in the car. We're planning to
follow up with a series of in-depth ethnographic studies. Today, our focus will be on the large-scale study and the preliminary findings from it. So here we see a snippet from, that's not good. Ed. Okay. So here we see a snippet from our study. This is a new survey that we designed, based on the executive time-usage study that Raffaella and her team have been using with an international group of executives. What do we ask our participants to do? We ask them to enter a full agenda of a representative work day. We ask them to enter all of their activities, from the time that they wake up to the time that they go to sleep, that last more than 15 minutes. So the granularity that we get is activities: personal activities, work
activities, and some qualitative data. We modified this questionnaire to give us more granularity around commuting. Specifically, we're interested in multitasking while commuting: what non-driving tasks users are currently doing, and what non-driving tasks users are interested in doing. We also added some questions to the study in order to measure the appeal of different alternatives to commuting. What would people do if they didn't have to commute? What would people do if they commuted in a self-driving car? We looked at the morning and evening commutes separately.
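As a concrete illustration of the kind of aggregation such a diary enables, here is a minimal sketch in Python. The categories, entries, and numbers are made up for illustration and are not the study's data or analysis code.

```python
from collections import defaultdict

# Hypothetical diary for one respondent: (category, minutes); by design each
# reported activity lasts at least 15 minutes.
day = [
    ("personal", 60),    # e.g. breakfast, family time
    ("commute", 35),     # morning commute
    ("work", 480),
    ("commute", 30),     # evening commute
    ("personal", 180),
]

def time_shares(entries):
    """Aggregate diary entries into percentage time shares per category."""
    totals = defaultdict(int)
    for category, minutes in entries:
        totals[category] += minutes
    total = sum(totals.values())
    return {c: round(100 * m / total, 1) for c, m in totals.items()}

print(time_shares(day))
# {'personal': 30.6, 'commute': 8.3, 'work': 61.1}
```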
We deployed this preliminary study with a Mechanical Turk sample, and these are the results that I'll present today. This is a sample of 220 people, 40 percent female, with a variety of backgrounds. We had 148 people who reported being commuters; out of these commuters, 126 reported driving themselves during the commute. For these commuters, the mean commute time was 60 minutes daily. So this is the sample that we're working with. Raffaella, are you back? Not too sure. Okay. Raffaella, are you back? Raffaella, are you online? This would be a good time
to get her. Okay.>>We’ll take Raffaella
back momentarily.>>Okay. So I'll just keep going. Looking at the general time share of activities throughout the day, we see that commuting is just under eight percent for the sample. Not surprisingly, we see that commuting activities are concentrated in the morning and in the afternoon, and that they're interleaved with personal and with work activities. We also find that more commuting is associated with less time spent in personal activities, which shows us that commuting is really substituting for personal time. And 45 percent among our commuters... Raffaella, are you back? Raffaella?>>Yes, I am back. I am back.>>Yay, just in time. Okay. So I was just saying
that 45 percent of our commuters associate commuting
with negative feelings, and specifically with
tiredness and stress. So that's the general overview. As I mentioned, what we were really interested in, especially within the context of the last two days' discussion on productivity, is what people are currently doing in the car. When we ask what people are currently doing in the car, we look beyond driving: what are the non-driving things that people are currently doing, and what are the non-driving tasks that commuters would be interested in, or desire, to do in a self-driving car? I'll ask Raffaella to take us through our findings: what do people do while commuting?>>Yeah. As Orit just said, it's not surprising that a lot of what people do while commuting relates to their personal time. In a sense, this is telling us that commuting is the way through which people take back the personal time that is lost going to or coming back from work. What is really interesting, and what I think we are trying to get at with this data, is that there are strong differences between managers and non-managers, which suggests that in part what you do while you are commuting depends on your socioeconomic status. It depends on your type of job. For example, managers might
be more interested in, or more in need of, interactive work, which can be done in a car. Now, moving to the type of activities that people do while commuting: the second benefit that this time-use survey provides is that it allows you to go deep into the secondary activities. Here, I would say it's not surprising, but it's nevertheless very scary to see that a lot of the secondary activities happening inside the car are things that are not very safe at the moment. For example, reading and replying to e-mails, which are, in the words of the speaker before us, active types of tasks. There is also a fair amount of passive, and perhaps less cognitively demanding, thinking and reflecting, which doesn't require so much multitasking.>>Raffaella, just for this audience, I would like to point out that even though these are
among the things that we found less frequently, people are programming and analyzing data in the car.>>Yeah. That's right.>>You can't see some of the audience, but some of the audience is shaking their heads in disbelief.>>Yeah. That's right. Well, they were incredibly honest with us in saying these things, but I'm going to make sure that I'm not in their city when I travel.>>Yes.>>One of the things
that we really are trying to push with this line of research is to think about the experience of commuting as part of the whole set of working activities. So this slide goes into what people do right before and after they commute, and the differences between morning and evening. We're not going to go into detail on this, but the answer is that we do find different patterns of activities, as you would expect. I think it's important to understand how commuting fits with the rest of the day, and also to think about what people might shift into an autonomous vehicle when the technology is ready. So, for example, much more thinking and replying to e-mail happens in the morning, and much of that is in the activity just before commuting.>>Hi Raffaella, just one second,
I’ll jump to the slide. I was just showing
the general graph that shows that if people can drive
a self-driving vehicle, then we see that their
interest in working is increasing compared
to the current commute.>>Absolutely.>>They still mostly
interested in doing personal stuff while in the car, but there has an increasing
interest both for workers and for managers to do work
in a self-driving car, and now I’m moving to this slide was a specific differences
that you can describe.>>Yeah, and I think
this is another interesting point, in the sense that when you ask people what they want to do, they answer that they just want to do more e-mail, and less thinking and reflecting, as you notice on the right. So I think there are some interesting questions here for technology development. One question that we have as a team is, for example, whether new technology should try to preserve personal time or increase work time. Is there a way for this technology to support the transition to and from personal time? Also, within working activities, should we try to increase the efficiency of what people are currently doing and demanding to do, for example e-mail, or is there an opportunity to develop technologies that can protect more thinking and reflecting?>>Right.>>We're thinking of this in terms of also collecting more data and finding ways forward. We really think it would be important to explore these questions in larger samples of knowledge workers, to really pin down the type of correlations that we are finding here. I'll now hand over to Orit, who will discuss the specific aspects of the technology.>>Right. So specifically we're
interested in gathering more data, and as Raffaella implied, we think that what you do for work, your socioeconomic status, does matter in terms of what you want to do in the car. So specifically, we would like to focus on knowledge workers, which brings us here to Microsoft. We hope to collect more data with a large sample of people who drive the car themselves, people who are driven, for example people who commute with Uber on a regular basis, people who drive cars that have strong automation features like a Tesla, and people who use something like the Microsoft shuttle. So, different modes of transportation for knowledge workers. In terms of technology design, I want to show a quick video. I know that someone wants
to help me with the video. It's a quick video of some of the ideation that we're doing in our lab. This is ideation that follows up on the nice sketches that Andrew was showing: how can we take the tasks that people want to do and the tasks that people are currently doing, and brainstorm in a simulator environment how these could be done? So here we're exploring different modalities of interaction. [MUSIC]. So what if we let them rehearse presentations, and what would hands-free gesture interaction in the car look like? What would it look like with a little controller, a combination of tangible input together with voice? [MUSIC]. What would the transitions look like when you have to put your little tangibles aside and go back to driving? What about the windshield, which we assume will have Augmented Reality capabilities that clear up to support the driving task, but at the same time will allow you to wrap up what you were doing, because we don't want users to have this lingering effect of still being focused on the work?>>Thank you very much. I'm going to just cut this a little bit short. Yeah. I just want to make
sure that we have time for Clarissa and also for a break. So I'm just going to open it up for maybe six minutes of questions or discussion. Yes, welcome.>>Hi. So just picking up on
this focus on commuting: the contrary point is to just get rid of commuting, right? I was at a conference session last week where someone was reviewing work on productivity, and the research is showing, as people have also said here, that people don't like commuting; it makes them feel miserable and stressed. On the flip side, when you look at productivity, people who work from home have higher productivity gains. Some of what's driving that is self-selection bias: people who are more motivated and focused can work better at home. But it's an interesting idea, because some of the topics that have been discussed over this past couple of days have been around the opportunities for the global workforce to do knowledge work remotely, to dial in and do remote work more. So there are alternatives to what we're focusing on at the moment: we're making the commute better, but do we even still need the commute? So much work is done over an Internet connection on a screen.>>It's actually an interesting
meta comment as it relates to our first deployment of the study, because our first deployment was with Mechanical Turk users, who are people that, at least part of the time, work from home on what they consider to be Mechanical Turk work. Still, the majority of these people were commuting to some sort of work. So even though they were freelancing, they still had a place to commute to and come back from on a regular basis. But I think that's an interesting point to consider.>>I think that's the reason why the focus on managers
is interesting too, because to some extent, for managers, interpersonal interactions will always be needed. So I think you're right, and I'm very familiar with the studies on commuting and working from home. For some professions, maybe commuting won't be needed, but my impression is that for managers there would perhaps never be a complete elimination of going to work, showing your face, and seeing what people think.>>Don?>>I think the studies that you're
doing are really important. It's actually quite complex. We'd been doing similar studies, and we've ended up with a three-by-three matrix, and even that's an oversimplification. Because first of all, the kind of vehicle matters. We've been looking at, in theory, completely driverless vehicles, no driver. Therefore, we could configure the interior space, and we asked people to configure it for whatever activity they would like to do. If you put people in a bus, that restricts the kinds of activities they can do, and an automobile with the standard parallel seats also restricts them. What we've discovered is that there's a difference between short trips, which might include commuting; medium-length trips, which might be an hour or more, which for some is commuting; and long trips, which might be driving from San Diego to San Francisco, 10 hours or so. Then it depends whether
you’re driving, whether you’re alone or
whether you’re with a group. There are two kinds of groups. One is a group of strangers and
the other is a group of friends. So we discovered the activities
vary a lot depending upon that. On the long trips, there was never a single activity: I need to work for the first few hours, but then I want to relax, I want to watch a movie, and so on. So it's actually a really
fascinating topic and I think it’s going to make a really big
difference as we move forward.>>So thank you. Yes, we are thinking about this complexity. There is a lot of data that we collected and are analyzing that I didn't share in the last 10 minutes of presentation. Some of it has to do with the fact that, yes, you may be traveling with people, and we are inquiring who the people are that you're traveling with in the car. Our focus is on the work context rather than on the San Diego to LA or San Diego to San Francisco travel contexts. We asked people explicitly to think about a representative work day; that was the context of our study. I agree that it's important to study all these different conditions. I also want to mention something about the study that Andrew showed before: it's also important to study these questions in different contexts. The study that Andrew showed of Bastian's work was done mostly in Germany, and we're currently planning a multicultural perspective. We would like to study this in Germany, in the UK, in the US, and perhaps with managers in China. So culture is also one important condition, work culture as well as other aspects.>>Yes, so I'm going to just give you one second, because that's
going to be the last comment. But I just want to say, regarding the survey that [inaudible] is talking about: we're actually very open to sharing it, because we would like to get as many users as possible to participate in it and be able to look at these cultural differences as well as even just geographical differences.>>Okay, so I remember when Jiang Li came to our institute
to give us a talk. One of the things he talked about is that eyes off the road is really dangerous, and I have seen two speakers talking about AR, Augmented Reality. Some people may believe that once you're looking at two views, you're not taking your eyes off the road. But I know that this Augmented Reality concept was tried decades ago. They tried it on pilots and it didn't work. A number of vision research studies have shown that when you have two views together, you're not going to really see both; there's a lot of change blindness or inattentional blindness happening. So even if you show a person the two views simultaneously, they may still not see many things on the road. So it's very important that you think about the human interface. How are you going to present that information so that they are not going to miss one of the views?>>I agree. I fully agree, but I just
want to mention that we already have Augmented Reality in the car. If you're driving a car that has a kind of heads-up display, you have Augmented Reality on your windshield. So this is happening, and we need to make sure that it happens safely while supporting other tasks beyond driving. In our lab, together with Andrew and his students, we're working on that. We also have to develop the technologies to study what we're doing, and to make sure that we understand where people are looking when using Augmented Reality for different tasks.>>I totally agree, and it's also worth
mentioning that it’s ideally truly Augmenting Reality. So it shouldn’t be two views
while you’re driving. Right? It’s not just a simple
display that you have to look to. In addition, hopefully it’s
actually the work part. You’re totally right, you
have to turn it off, right?>>I’m very, very sorry
but we’re going to, because we’re really getting
close to break time.>>All right, thank you
for joining us virtually.>>But we’re going to
have lots of opportunity to talk and I want to make sure I really give Satish an opportunity
to basically do this talk. So let’s thank for it. Thank you very much for it. Satish, go ahead and take it away.>>Mic is on?>>Yes, is it?>>Yes, I think so. Good morning
again, everybody, and thanks, Linda, for this opportunity. I'm Satish, from Microsoft Research India. We have a lab in Bangalore, which is in the southern part of India. Today I'm going to talk about one of our ongoing projects in the lab called HAMS, and I'll come to what HAMS stands for in a second. This is joint work with my colleagues Akshay and Venkat, researchers in Microsoft Research India, and a bunch of very smart research fellows and interns whom we have working on the project. So the motivation for
HAMS was basically this: I don't know how many of you have come to India or have experienced what it's like to drive on roads in India, but if you have, you will quickly realize that there is a serious problem that you face in that part of the world. India probably has among the highest rates of fatalities from road accidents. Road safety is a really critical issue, and that became the starting point for HAMS. When we looked at road safety, again, given the conditions that exist in our part of the world, I'm pretty sure, like him, I'm also going to be dead by the time driverless cars become a reality in India. So when we looked at road safety, we started focusing on doing things which could potentially improve driving practices by focusing on the driver and driving behavior. That really led to HAMS being born. So HAMS really stands for,
this should work, right?>>I’ll just [inaudible] I’ll
be your clicker [inaudible]>>All right, now we
can start [inaudible]>>That’s very strange.>>[inaudible] yeah, that’s
the one, the second one.>>Okay. Let's try
this one more time.>>Right so HAMS
essentially stands for Harnessing Automobiles for Safety, and the core idea was to use technology in a way that could be easily integrated with the existing vehicle population in India, and to use techniques which could then monitor driver behavior in a useful way, right? So when you look at easy retrofitting into existing vehicles, the most obvious device that came to light was the smartphone, because everybody carries a smartphone. So the core idea behind HAMS is to use camera-based sensors which are available in off-the-shelf smartphones, and to use the phone as a device fitted within the vehicle to monitor driver behavior, which can hopefully lead to more responsible driving and improve overall driving practices in India, right? Because this involves a
lot of video processing, HAMS also allows for edge processing, so we don't ship all the video that's recorded by the camera sensor in the vehicle back to the cloud. We basically generate incidents, so that only the video snippets relevant to specific incidents, where something needs to be alerted or supervised, are processed and sent back to whoever is looking at that data. So we don't need expensive cloud connectivity scenarios, because in India it's also a reality that many places don't have connectivity. It's important that we don't take for granted that we're going to be able to ship a lot of video back to the cloud for processing, right?
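To make the incident-driven idea concrete, here is a minimal sketch in Python of buffering frames on the phone and persisting only a short clip around each detected incident. The buffer size, detector interface, and storage are assumptions for illustration, not the actual HAMS implementation.

```python
from collections import deque
from dataclasses import dataclass
import time

BUFFER_SECONDS = 10   # assumed amount of pre-incident context to keep
FPS = 15              # assumed camera frame rate

@dataclass
class Incident:
    label: str        # e.g. "phone_use", "drowsiness", "tailgating"
    timestamp: float

frame_buffer = deque(maxlen=BUFFER_SECONDS * FPS)
pending_clips = []    # stand-in for local storage, uploaded when connectivity allows

def on_new_frame(frame, detectors):
    """Run the detectors on each camera frame; keep only a rolling buffer."""
    frame_buffer.append((time.time(), frame))
    for detect in detectors:
        incident = detect(frame)           # each detector returns an Incident or None
        if incident is not None:
            save_incident_clip(incident)

def save_incident_clip(incident):
    """Persist only the buffered frames around the incident, not the full video."""
    clip = list(frame_buffer)              # pre-incident context in this sketch
    pending_clips.append((incident, clip))
```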
We started out with a pilot on our own office premises at MSR India. We have these shuttles, just like you have here, which ferry employees back and forth between their homes and the office. So we started out HAMS by fitting some of these vehicles with the smartphone-based camera sensors, and we've recorded about 25,000-plus kilometers, that's about 15,000 miles, of data for this project. As we speak, we are also undertaking a pilot project in partnership with the government of India, and I'll talk about that more in the following slides. So this is essentially
what HAMS does. As you can probably imagine, the smartphone is fitted inside the vehicle, and it has two cameras operating; we operate both cameras at the same time. The front camera of the smartphone looks into the car and monitors driver state. The back camera, which is on the back of the smartphone, looks out of the car at the driving environment, and of course, using other inputs like GPS and the other sensors which are available, we are able to triangulate and get a sense of the driving context, and within that context look at driver behavior, and hopefully provide interventions which can encourage
better driving practices. So these are some of the detectors that we've built within HAMS. We've looked at distracted driving, which is a very common problem in India. Although states do disallow cell phone use while driving, it's fairly common to have people talking on their phones, and HAMS can actually detect cell phone usage while driving. We built detectors that can look at fatigue indicators such as yawning and eye blinking, and we've also built detectors for gaze tracking. So that's the driver side. This is again extremely common in India: culturally, people do not have a habit of really being safe while approaching intersections and things like that, or of looking at the mirrors before they take a left or right turn. Those are also things that we want to look at and enhance. Then there is the driving context, which is covered by the camera on the back of the phone looking out. Using inputs from the on-board diagnostic device, which is retrofitted into cars, we can look at things like speed and acceleration data and triangulate that with the environment outside. So we look at things like ranging and tailgating using positional data that you get from the outside environment, and HAMS also provides inputs in terms of safe driving practices. So that's the context setting.
When we looked at scenarios in which you could apply HAMS, a few of them came to light. We could look at driver training. In India, there are these driving schools which train drivers for the likes of Uber and Ola, which is the Lyft equivalent in India. These drivers go to these driving
schools and get certified, and using that certification, they’re able to basically get job opportunities in good
companies like Uber. So that's one application that obviously emerged for HAMS. We also got a lot of interest from companies, trucking companies as well as ride-sharing companies, in terms of fleet monitoring, because they were interested in making sure that they get drivers who drive safely and therefore nobody is harmed, and we're having a couple of conversations with [inaudible] companies on that. But really, the scenario that we have now focused on to start with is what is called license
testing, driver license testing. This is a project
that we’re doing with the government of India and the Ministry of Road,
Transport, and Highways. So this really involves monitoring the driver as he or
she is taking a driving test. This is the equivalent of the DMV here in the US; people go to this entity called the RTO, and they take a driving test. Today, the situation in India on license testing is completely broken. There are statistics; again, I don't know how many of you are from India, but the manual process that is used in the driver testing scenario is slow and tedious of course, and it's prone to corruption and other such issues. There are data points which are as shocking as
something like this: more than half the people in India apparently have gotten driver's licenses without even taking a test. There are multiple reasons for this scenario, partly because the system is completely non-technology-driven. In this process, there is now an opportunity that we see where we can bring automation, via a HAMS-based intervention, to make the driver testing process transparent, efficient, and obviously more robust all in all. So that's the goal of the current effort with HAMS. So what do we aim to achieve with automated driver license
testing? In this case, what we do is basically this: we don't have a person physically monitoring the driver; instead, we have the smartphone. Using HAMS inside the smartphone in the vehicle, we look at a scenario where we can both increase the coverage of the driving test parameters and obviously reduce cost, and most importantly, make the whole process of issuing a driver's license clean
take your driving test, the smartphone is
fitted inside the car. Using very minimal
track instrumentation, that’s a typical driving test that
a person takes in the DMV wall. Using very minimal instrumentation, we can basically retrofit this
technology into our environment, and make sure that we bring about improvements and
productivity increases in the driving test
scenarios in India. So what we also achieve
through HAM is that currently, this is the sample of
the parameters that are tested, and I know it’s an eye chart, but there are about 20 parameters
that are supposed to be tested in each driving test before issuing
someone a driver’s license. What currently happens is, although that includes things like mirror scanning and things that, for today someone sitting next
to the driver inside the car, it’s impossible for
that person look at whether the person scanned the mirror before he or she took left or right turn. So even if someone present today, the current systems unable
to actually do a good job of the parameter that are supposed to be tested as part of
driving license test. But with HAMS and with the detectors, now that we are hopefully
been bringing into the play, we can actually increase
the overall efficiency of the system because of doing
things like gate stacking, etc. So it makes for an obviously
simple and low-cost processes, but also increase coverage and
well-tested and safe drivers, who will hopefully promote a better
driving practices on the road. So this is our simple video. I mean, this is
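In the demo that follows, a running score is updated each time a violation is detected. As a sketch of how such a rule could look, here is a tiny example; the parameter names, penalty weights, and pass threshold are assumptions for illustration, not the actual HAMS test parameters.

```python
# Hypothetical scoring rule: start from a full score, deduct per detected
# violation, and compare against an assumed pass threshold.
PENALTIES = {
    "no_mirror_scan_before_turn": 5,
    "harsh_braking": 3,
    "lane_change_without_mirror_scan": 5,
    "no_scan_at_pedestrian_crossing": 4,
}
PASS_THRESHOLD = 80

def score_test(violations, start=100):
    """Deduct a penalty for each detected violation; unknown labels cost 1."""
    score = start
    for v in violations:
        score -= PENALTIES.get(v, 1)
    return score, score >= PASS_THRESHOLD

print(score_test(["no_mirror_scan_before_turn", "harsh_braking",
                  "lane_change_without_mirror_scan"]))
# -> (87, True) under these assumed weights
```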
So here is a simple video, a simulation that we did during our testing of what HAMS really does, which should give you an idea of what we're trying to achieve in this scenario. We can play the video. So there's a white [inaudible], but I can explain quickly: as
the person gets into the car, the smartphone identifies the face and makes sure that the person who is taking the test is actually the right person. Because in India, there are a lot of issues where someone fills out the form for the driving license, and someone else takes the test. There are also those issues, and this solves that kind of thing. [MUSIC]>>Showing whether the driver
is looking straight, or at the left or right mirrors.>>So the left screen basically
detects the outside environment, and the right screen detects what's happening in the driver's seat. [MUSIC]>>It will also show a score
at the bottom right, which is updated each time
a violation is detected. The driver is
approaching a left turn. He looks straight while
performing the turn without scanning the mirrors
as he should have. HAMS detects that
the driver is rapidly approaching a vehicle
directly in front; that is, there is an obstacle. The driver approached the obstacle at high speed and then braked sharply. He then shifted lanes to the right without scanning the mirrors. [MUSIC]>>HAMS detects various tagged locations along the track. In this case, it detects
a pedestrian crossing. Here, the driver brakes
abruptly and scans to see, but only to his left
before proceeding. [MUSIC]>>So in the interest of time,
I won’t play the whole video, but you get the general idea, I hope. This is how basically it is. Yeah. Yeah, that’s an obvious one. So this is what we’re
doing right now. As we speak, as I said, we’re doing this pilot with
one of the states in India, in the Northern part of
India called Dehradun. The RTO, that should be
the equal of the DMV, is now actually testing HAMS to
be deployed at that facility, and we’re pretty hopeful that in
a month’s time, mid-August time, we’ll actually have
HAMS’ best solution actually issuing out automated driver
licenses in India very soon. So we hope to have a progress there. It’s also working with
other government entities. That is an organization called CIRT, which is Central Institute
of Road Transport which does technology recommendations
for various DMV equivalents in India to adopt technology. So we are working with them to validate HAMS as part of
their recommendations. We’re hopeful that in
the coming months and years, we’ll have automated diver license testing in India
becoming a reality, which will hopefully promote
safer driving practices, and hopefully less fatalities caused because of
all the accidents. Thank you.>>Satish, thank you.
Given that it’s 10: 45, and we actually have to get notes. I’m going to ask, I’m going to
change this around a little bit. I’m going to apologize to Satish, that we’re going to wait for
questions until afterwards. What I’d like the group
to do right before they go out to get coffee right now is, everybody, right now,
take a sheet of paper because I know everybody’s
writing something down. Take a sheet of paper,
write something down, and place it up there, and then go get coffee
while we look at.>>[inaudible]>>We really.>>I have a question. So Satish.>>I can give the answer.>>So Eric mentioned this too, the idea that you have, how do we affect policies.
So how did you do it? Because it’s not simply
developing the software, it’s getting someone to accept
it to make it an official thing.>>Yeah. So one of the things we started doing was to work with one of the largest car companies in India on this, because they showed interest. It so happens that that company has a joint venture with the government of India for setting up a Driver Training Research Institute, which promotes safe driving practices. So we started out with that pilot, and through that we got connected to the Ministry of Road Transport, and we met the minister in charge of this, and he was pretty excited about the whole thing. He said, "Let's do a pilot and then see whether it works." That's how things got done. So yeah, I agree with you. It's a tough job to sell
this to the [inaudible].>>Let’s give Satish
a round of applause.
