Making Interaction More Natural, One Device at a Time: The Story of ‘We AR Sight’
Source: SIGGRAPH | March 21, 2019
Translated by: Turing
Original post: https://blog.siggraph.org/2019/03/we-ar-sight.html/
Indian inventor and entrepreneur Sarang Nerkar with We AR Sight at SIGGRAPH 2018 on Indian Independence Day (15 August)
What started with Open EyeTap, the world’s first open source, augmented reality headset that you could make at home for less than $250 — and that Sarang Nerkar invented alongside Steve Mann, and used as the premise of his thesis at the University of Toronto — eventually became “We AR Sight.” The AR headset prototype was shown as part of the SIGGRAPH 2018 Virtual, Augmented and Mixed Reality program in Vancouver. We sat down with Nerkar to learn more about the device and how he hopes to continue advancing assistive technology.
SIGGRAPH: You presented a prototype of the “We AR Sight” device on site at SIGGRAPH 2018. Share a bit about the product and what it does.
Sarang Nerkar (SN): “We AR Sight” is an augmented reality headset for people who are visually or cognitively impaired. At SIGGRAPH, we hadn’t yet included cognitive impairments, but now we are focusing on those, too. We realized that the problem we are solving is people not knowing or understanding what or who is in front of them, and that issue is common to both the visually impaired and the cognitively impaired; for example, we see it in people with Alzheimer’s or dementia, so why limit ourselves to just one issue? The device tells people with either impairment who or what is in front of them and how to reach that person or thing. So, if I did a gesture right now, the device would tell me that you are in front of me and explain how I could get to you. If there were text, the device would read the text.
Why limit ourselves to just one issue?
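To make that interaction concrete, here is a minimal sketch of the “who is in front of me” step: detect a face in the camera frame and announce its position through speech. OpenCV’s bundled Haar cascade and the pyttsx3 text-to-speech library are stand-ins chosen for illustration; the camera index, thresholds, and phrasing are assumptions, not details of the actual device.

```python
# Minimal sketch: detect a face in the camera frame and speak its position.
# OpenCV's bundled Haar cascade and pyttsx3 are illustrative stand-ins for
# whatever detection and speech stack the real device uses.
import cv2
import pyttsx3

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
speaker = pyttsx3.init()

def announce_person(frame):
    """Detect the largest face and describe where it is relative to the wearer."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return
    # Pick the largest detection; its horizontal center tells us left/right.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    center = x + w / 2
    third = frame.shape[1] / 3
    side = ("on your left" if center < third
            else "on your right" if center > 2 * third
            else "straight ahead")
    speaker.say(f"There is a person {side}.")
    speaker.runAndWait()

cap = cv2.VideoCapture(0)  # default camera; the index is an assumption
ok, frame = cap.read()
if ok:
    announce_person(frame)
cap.release()
```

On the headset, a loop like this would be triggered by the wearer’s gesture rather than run once, and the guidance would continue as the person moves.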
Something very exciting that we’re working on right now is having the system detect the language of the text it sees. For example, in India you have different sign boards and different languages depending on where you go. In Mumbai you would have signs in Marathi, in Delhi you might have them in Hindi, and a lot of places also have signs in English. But not everybody speaks English, so there’s a whole cocktail of languages going on and, in practice, a person usually understands one language, maybe two. For those who are visually impaired, it’s probably only one. What we’ve done is, regardless of what language the sign or text in front of a user is in, the device will translate it and read it back to the user in whatever language is most comfortable. This way, we are really expanding the horizons of what visually or cognitively impaired people can do in day-to-day life. Features like facial recognition and object recognition are common AR applications, and we have not invented them on our own; however, we’ve packaged them into a device (a low-cost device, I may add) in such a way that it actually impacts people’s lives now, ahead of what some future, fancy AR headset might do.
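A rough sketch of that sign-reading flow, assuming an off-the-shelf stack: pytesseract stands in for the OCR engine, deep-translator for the translation step, and gTTS for speech. None of these libraries are confirmed parts of the “We AR Sight” pipeline; they just make the idea concrete.

```python
# Sketch of the sign-reading flow: OCR the sign, translate the result into
# the user's preferred language, then speak it. The specific libraries
# (pytesseract, deep-translator, gTTS) are illustrative stand-ins.
import pytesseract
from PIL import Image
from deep_translator import GoogleTranslator
from gtts import gTTS

def read_sign_aloud(image_path, user_lang="mr"):
    """OCR a sign photo and save speech audio in the user's language."""
    # Ask Tesseract to try several Indian scripts plus English in one pass;
    # the corresponding .traineddata packs must be installed.
    text = pytesseract.image_to_string(
        Image.open(image_path), lang="hin+mar+tam+eng"
    ).strip()
    if not text:
        return None
    # Translate from whatever language was detected into the user's language.
    translated = GoogleTranslator(source="auto", target=user_lang).translate(text)
    # Speak the translation; on the headset this would go to the speaker.
    gTTS(text=translated, lang=user_lang).save("sign.mp3")
    return translated

print(read_sign_aloud("sign.jpg", user_lang="mr"))
```

On the device itself, the capture would come from the camera and the audio would play immediately rather than being saved to a file.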
There are a lot of amazing AR headsets out there, like Magic Leap or Meta, which actually shut down recently. Meta was founded out of our lab at the University of Toronto, and the issue [I think] they encountered is that there was a lot of talk and very little delivery. That was something we wanted to challenge [with “We AR Sight”]. We wanted to keep it very simple, deliver it to people, and actually get opinions from those people. That’s what “We AR Sight” is all about. This is the older headset that we had.
We wanted to keep it very simple, deliver it to people, and actually get opinions from those people.
We are also working purposefully to make them “cool looking.” We realized that if we went with regular-looking headsets, people might not like them, because these users are already quite excluded from regular social interactions. We want “We AR Sight” to be as socially comfortable and, quote-unquote, as normal as possible.
As we move forward, we hope to make everything smaller so as to look more like a regular pair of eyeglasses. Currently, we have been able to make each pair for about $300.
SIGGRAPH: You mentioned Indian languages as well as English. Are you also creating translations for Spanish speakers or other languages worldwide?
SN: For now, we are sticking to what I would call “scripts.” India has multiple scripts, and maybe 1,000 times that many languages. So it’s practically impossible to include all of the Indian languages, but we want to include all of the scripts, since a script is the medium through which each language is written. The rest is just interpretation. I can write English and Spanish in the same script, and you would be able to understand the Spanish because you speak Spanish, and I would be able to understand the English because I comprehend English.
Right now, the scripts we’re going for start with Devanagari, which is probably the most widely used script in all of India and covers Hindi, Marathi, Sanskrit, and a little bit of Punjabi and Gujarati. The other scripts we’re working on are South India-based: Tamil, Kannada, and Telugu. Devanagari is our main focus right now, as that matches the client base in the regions where we want to distribute.
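For a sense of why scripts are the tractable unit: an OCR engine such as Tesseract is configured with per-language packs, and the packs for one script overlap heavily, so a short list covers most of the languages named above. The mapping below is illustrative only (the pack codes follow Tesseract’s conventions; treating this as the project’s actual configuration would be an assumption).

```python
# Illustrative mapping from script to Tesseract language packs that read it.
# A Devanagari-capable configuration covers several languages at once, which
# is why "We AR Sight" targets scripts rather than individual languages.
SCRIPT_PACKS = {
    "Devanagari": ["hin", "mar", "san", "nep"],  # Hindi, Marathi, Sanskrit, Nepali
    "Tamil":      ["tam"],
    "Kannada":    ["kan"],
    "Telugu":     ["tel"],
    "Latin":      ["eng"],
}

def tesseract_lang_string(scripts):
    """Join the packs for the chosen scripts into Tesseract's 'a+b+c' form."""
    packs = [p for s in scripts for p in SCRIPT_PACKS[s]]
    return "+".join(packs)

# e.g. a headset configured for Mumbai signage:
print(tesseract_lang_string(["Devanagari", "Latin"]))  # hin+mar+san+nep+eng
```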
SIGGRAPH: So the scripts you mentioned and the Roman alphabet are available right now?
SN: Correct. As long as you’re able to read something, I feel like in India you can comprehend most of the languages if you can hear them. So, with Devanagari, people can understand most of Marathi even if they usually speak another language; but they wouldn’t be able to understand it at all if it were written in the Roman script and they are only used to speaking it. So there are two things happening: one is translation, and the other is reading across scripts, which is already a big task.
SIGGRAPH: Knowing that, are there plans for “We AR Sight” to, in the future, release more widely and have more people using the devices?
SN: Absolutely, yeah. We’re making a global product. Globally, we want to do it on a not-for-profit basis. We want to enable the formation of various companies all over the world that then use our technology and provide services to people everywhere. We cannot take up the task of bringing “We AR Sight” all over the world on our own, but we want to provide the technology to others worldwide so that they start doing this with us. Right now, we’ve made the device very basic. It uses a regular speaker instead of bone-conduction speakers, and a regular 4-megapixel camera instead of the more widely available 16- or 21-megapixel cameras. The reason for that is to bring the cost to a minimum.
SIGGRAPH: Have you considered making the model both for-profit and not-for-profit, almost like how TOMS Shoes is set up?
SN: Yes. And that will really help us expand all over the world. It’s sort of like what people have done with the Raspberry Pi, but more customizable. The basic structure is what people would get for free. From that, they can add bells and whistles, and make it into a better-looking product. In that way, we won’t have one “We AR Sight” but maybe a few thousand different versions of it that people will be able to buy at different prices. That’s really the goal here and is, I feel, the best way to make it global.
SIGGRAPH: What manufacturers, if any, do you work with to produce devices, or is it all in-house right now?
SN: We’re making them all in-house right now. I’ve started a little research lab in my house, inspired by the lab of my former mentor, Professor Steve Mann. A big issue that we noticed in Canadian academic institutes is that if someone wants to do research, they need to go through a bureaucratic process. Steve had a very simple concept: if you meet me and you’re brilliant, then you can work in my lab. He had one in the basement of his house that anybody could go to. Even if you were an eighth-grade dropout… and he had someone like that named Kyle! That person was probably a better fabricator and better machinist than anyone else who came to the lab, even people with Ph.D.s or double Ph.D.s; he was better than everybody.
If you meet me and you’re brilliant, then you can work in my lab.
I wanted to bring that culture to India, where a “Kyle” over here could come innovate regardless of their background and regardless of who they’re associated with. As long as you’re creative, you should have the space to turn your idea into a prototype or product. That’s the goal with which I started over here, and that goal also includes the capability for initial production.
SIGGRAPH: “We AR Sight” aims to help Indian society realize the potential of wearable computing and augmented reality. What inspired this mission?
SN: The goal is to bring technological development to India. One thing I experienced while growing up in India was that I always got technology a lot later than my cousins living abroad. I used a cell phone for the first time in maybe 2003 or 2004, when a Nokia was released. By that time, much of the rest of the world had already had exposure to cell phones. Same with the internet; we experienced it a lot later. Smartphones have really only become common in India in the last three or four years. Before that, people in villages didn’t really know what smartphones were.
I come from a privileged class in India where I had access to technology. I had the capability, and even the right, to imagine going to the University of Toronto and working with leading scientists. That was an option for me. It’s not even an option, or even a consideration, in most people’s minds here. So, for me, the best way to facilitate change is to bring that culture, that sort of capability, here to India. That’s the reason we’re doing it in India.
With regard to assistive technology, that’s primarily because of Steve Mann. He came up with the concept of wearable computing in the 1970s and is known as the world’s oldest cyborg. The reason he came up with the technology was to help him in day-to-day activities. I feel like that is something we’re deviating from today. We’re making augmented reality, but it’s not really solving any problems. My goal, and this was part of my thesis with Steve as well, is to make augmented reality and wearable computing need-based rather than want- or desire-based. If it’s solving a need, then I want to work on that problem. One of the primary advances that leads to need-based technologies is allowing someone to do something that they, as a human, weren’t able to do before.
If it’s solving a need, then I want to work on that problem. One of the primary advances that leads to need-based technologies is allowing someone to do something that they, as a human, weren’t able to do before.
I don’t care how cool-looking a device or output is, as long as it’s providing information that I wasn’t getting earlier. For me, it’s putting more focus on information and less on want- or desire-based factors. It should still look cool and sexy — and should have great graphics — but I shouldn’t be spending more time on graphics and less on information, because that sort of defeats the purpose of why we need augmented reality: making it useful, and making it in an affordable and scalable manner.
SIGGRAPH: Let’s get technical. What was it like to develop “We AR Sight” in terms of research and execution?
SN: We had a team of six people, and these guys were just budding inventors. They didn’t have many skills when it came to 3D printing and the like. So what I did, based on their prior interests, was give them a lot of learning material. Luckily, because I had just come from the University of Toronto at the time, I had access to a lot of online learning tools that people could use. I sort of commissioned one person for 3D printing, saying, “You take a month and figure out how to design these.” And that person really figured out how to design something. Then these six people who had taught themselves new skills started training others, and we suddenly had an in-house training team. Because this is a trust, we don’t envision people staying with us long-term. It’s all about transferring skills, learning more, and sharing that with the next group.
In terms of electronics, we went off-the-shelf initially, which is what we presented at SIGGRAPH. Now, we’re into our own customized electronics. In terms of 3D printing, we already had our own design for the mechanical structure. We started off with a welding goggle and a depth-sensing camera on top; I think we used the SoftKinetic for that. Then we moved on to using the Open EyeTap framework. Eventually, we developed our own modular system where you can actually snap sensors out and snap them in. We’re progressing slowly toward the goal of having a universal system.
The modularity of Open EyeTap should be used for initial prototypes, but after the first prototype, you should really focus on that particular application on its own, because every application has its own ergonomics and its own usability factors.
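The snap-out/snap-in sensor idea maps naturally onto a plugin-style software design: each physical module pairs with a driver behind a common interface, so the rest of the headset code never changes when hardware is swapped. A minimal sketch of one way to structure that follows; the class names and sensor modules are invented for illustration and are not taken from the Open EyeTap or “We AR Sight” codebases.

```python
# Sketch of a plugin-style registry for snap-in sensor modules: each physical
# module pairs with a driver class, and the headset only talks to the common
# interface. All names here are illustrative.
from abc import ABC, abstractmethod

class SensorModule(ABC):
    """Common interface every snap-in module must implement."""
    @abstractmethod
    def read(self) -> dict:
        """Return the latest reading as a plain dict."""

class DepthCamera(SensorModule):
    def read(self) -> dict:
        return {"kind": "depth", "meters": 1.8}  # placeholder value

class AmbientLight(SensorModule):
    def read(self) -> dict:
        return {"kind": "light", "lux": 320}     # placeholder value

class Headset:
    """Holds whatever modules are currently snapped in."""
    def __init__(self):
        self.modules: dict[str, SensorModule] = {}

    def snap_in(self, name: str, module: SensorModule):
        self.modules[name] = module

    def snap_out(self, name: str):
        self.modules.pop(name, None)

    def poll(self):
        return {name: m.read() for name, m in self.modules.items()}

headset = Headset()
headset.snap_in("depth", DepthCamera())
headset.snap_in("light", AmbientLight())
print(headset.poll())
headset.snap_out("light")  # module physically removed
print(headset.poll())
```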
SIGGRAPH: What’s next for “We AR Sight”?
SN: We have people trying the prototype out and giving us their opinions as we gear up toward the first 1,000 units that we want to give away for free. Right now, our focus is funding for that 1,000. We are fixated on that scale because it will best enable us to improve: at that volume, we can go from 3D printing to full-scale injection molding, which is not something a not-for-profit organization with only one year under its belt can tackle without money. If we can reach that kind of volume, we’ll be able to tackle problems a lot better and run our programs a lot better.
SIGGRAPH: Regarding beneficiaries, do you work with anyone, like an NGO?
SN: There’s the National Association for the Blind (NAB India) that we want to work with. To make the process of receiving donations easier, what we have done is let current donors decide their beneficiary. People, at least in India, donate only when there’s a personal reason associated with it. To date, 50 people have given us the funds for devices, and each has someone in their life who is visually or cognitively impaired.
We want to work with NAB India because, in cases where we have donors but no suggested beneficiaries, we would like to go to the association for distribution. I think that once the first 1,000 devices are delivered, we’ll attract the attention of organizations like NAB India or the Seva Foundation; a new not-for-profit is not an attractive partner for anyone, and delivering an initial set will help us build credibility.
SIGGRAPH: Shining a light on adaptive technology is a major focus of the upcoming SIGGRAPH 2019 conference. What do you see as the future of adaptive tech? What devices, in addition to yours, would you like to see invented and adopted?
SN: Wearable computing and augmented reality have massive potential to make an impact in the space of adaptive and assistive technologies because they’re all about giving people superhuman capabilities. Sometimes the superhuman capabilities are just human capabilities that people have been deprived of by nature. Something I would really like to see developed is adaptive technology for all of the senses. For example, hearing is something we can tackle very quickly, and it is already being tackled significantly through hearing aids. We don’t have as many visual aids as we do hearing aids; however, the hearing aids we have today are very different from what augmented reality-based hearing aids could be. For example, AR-based hearing aids could allow people not only to hear the things around them more clearly, but could expand that hearing to things like ultrasonic sound.
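One classic way to extend hearing into the ultrasonic range is heterodyning, the trick used by bat detectors: mixing the incoming signal with a local oscillator shifts its spectrum down into the audible band. A toy sketch of the idea, with arbitrary illustrative frequencies; nothing here describes an existing AR hearing aid.

```python
# Toy heterodyne downshift: move a 25 kHz "ultrasonic" tone into the audible
# range by mixing it with a 22 kHz oscillator and low-pass filtering. This is
# the bat-detector technique, not anything from "We AR Sight".
import numpy as np
from scipy.signal import butter, filtfilt

fs = 96_000                                   # sample rate high enough for ultrasound
t = np.arange(fs) / fs                        # one second of samples
ultrasonic = np.sin(2 * np.pi * 25_000 * t)   # inaudible 25 kHz tone

# Mixing produces components at the difference (3 kHz) and sum (47 kHz).
mixed = ultrasonic * np.sin(2 * np.pi * 22_000 * t)

# Keep only the audible 3 kHz difference component.
b, a = butter(4, 8_000 / (fs / 2), btype="low")
audible = filtfilt(b, a, mixed)

# With N = fs samples, FFT bin k corresponds to k Hz; expect a peak near 3000.
print(f"peak at ~{np.argmax(np.abs(np.fft.rfft(audible))) / len(t) * fs:.0f} Hz")
```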
Trying to make interactions more natural, as opposed to mechanical, to overcome these sensory-based issues, and creating technology that is personal to the human, is what I want to see. When you think of visual impairment, especially in a country like India, you don’t want every sign to have braille on it because, in the long run, that is quite impractical. There are so many signs around us, and we never even hear of braille in India, because who has the money to go and replace every sign, or add a sign that has braille on it? It is best if adaptive or assistive solutions do not change the environment; they should allow environments to remain the way they are while enhancing a human’s capacity to comprehend the environment. Focusing on developing countries, on the lower end of the income map, is what assistive technology should do. These solutions should be within the reach of all users, not just privileged ones.
SIGGRAPH: This has been such a great and eye-opening chat. Before we go, can you share a favorite memory from your time at SIGGRAPH?
SN: The best moment would be when I had just concluded my presentation about “We AR Sight” and questions started coming in. It was very overwhelming, actually… we had to cut the room off at one point because there were so many questions. That made me feel very happy. The room was quite jam-packed and I had initially been a little confused, a little insecure, about introducing “We AR Sight” at SIGGRAPH because, in my head, [the conference] is all about visuals — amazing, beautiful visuals — and here I was with an AR headset that doesn’t show you anything. I was really worried that people wouldn’t be as interested. (And also because it was a very rudimentary prototype compared to others.) It really was the best, though, to have people appreciating and clapping at the end of the presentation. Some were even whistling, which was very cool.