VRguy podcast Episode 10: Anush Elangovan, Nod Labs, on Human Intent and Hand Controllers

My guest today is Anush Elangovan, CEO and Founder of Nod Labs. This episode was recorded on April 14, 2016.

Anush and I talk about understanding human intent through gestures and the evolution of hand controls for VR and AR. We discuss the advantages and disadvantages of having a hand controller as opposed to natural interaction methods, text entry for VR and much more.

Prior to Nod Labs, Anush worked at Google (building Chromebooks), Agnilux, FireEye, and Cisco. Anush has more than 20 patents pending or issued. He loves extreme skiing, mountaineering, and long runs with his dog.

Interview transcript

Yuval Boger (VRguy):      Hello Anush, and welcome to the program.

Anush:  Hi.

VRguy:  Who are you, and what do you do?

Anush:  I’m Anush Elangovan. I’m the founder and CEO of Nod Labs. Nod Labs is a company founded in Mountain View, and our mission is to communicate human intent.

VRguy:  What do you mean by human intent?

Anush:  Our focus is to track human motion and communicate it to the ambient computers around us. We sense your finger movement, your hand movement, your head movement, and we make sense of it, remove false positives, and try to communicate what that intent is to the computers nearby. For example, your head is moving left, and we communicate that to the computer so that the rendering of virtual reality you see in your headset follows it.

VRguy:  Excellent. You and I have known each other for quite some time, and I’ve followed the progression of your product. I think you had the Ring, and then the Backspin, and now the new project. Could you explain the progression? Was it just happenstance, was it one big strategy, was it just a continuous learning process? Walk us through that please.

Anush:  It was a continuous learning process. When we started, we kind of knew the 50-year problem. We knew communicating human intent was going to be a problem today, five years from now, 10 years from now. When we started two and a half years ago, we were focused on communicating intent to smart things around us. It was lights, or thermostats, or smart doors, and our goal was to build an invisible controller that’s always with you and can communicate with your car door, or be used for presentations.

                That’s how we came upon the first product that we built, which was the Ring, the Nod Ring. The challenge with that, we quickly realized, was that the smart things around us didn’t evolve as fast as we expected them to, so we took the technology and applied it towards virtual reality, where we needed the same kind of controls, but also a way to move ourselves in a scene. We modified and evolved the Ring to have an analog stick on it. That became the next product, the Backspin, which is still available on our website, NOD.com, and is useful for people to get simple 3 DOF tracking. That was the second product.

                As people in the virtual reality realm know, there’s 3 DOF and 6 DOF tracking. 3 DOF is roll, pitch, yaw, and 6 DOF is roll, pitch, yaw plus XYZ, which means you get absolute positioning in a room. We wanted to evolve from the 3 DOF tracking solution to a 6 DOF tracking solution, and we’ve been squarely focused on creating a 6 DOF tracking solution that is not only very accurate, but also mobile. That is what we launched at GDC, called Project GOA. It’s a reference design for a mobile form factor 6 DOF, or absolute position, tracking system.
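
To make that distinction concrete, here is a minimal sketch in Python of the two pose representations (the names and types are hypothetical illustrations, not Nod’s API):

```python
from dataclasses import dataclass

@dataclass
class Pose3DOF:
    """Orientation only: which way the device is pointing."""
    roll: float   # rotation about the forward axis, radians
    pitch: float  # rotation about the side-to-side axis, radians
    yaw: float    # rotation about the vertical axis, radians

@dataclass
class Pose6DOF(Pose3DOF):
    """Orientation plus absolute position in the tracked room."""
    x: float = 0.0  # meters in a fixed room frame
    y: float = 0.0
    z: float = 0.0

# A 3 DOF tracker can report that the controller tilted 30 degrees,
# but only a 6 DOF tracker can report that you stepped half a meter forward.
tilted = Pose3DOF(roll=0.0, pitch=0.52, yaw=-1.1)
moved = Pose6DOF(roll=0.0, pitch=0.52, yaw=-1.1, x=0.5, y=1.4, z=-0.8)
```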

VRguy:  On 6 DOF technology, let’s jump into the technology a little bit. I know that there are various approaches: using cameras, using magnetic sensors, inside-out, outside-in. Could you elaborate on which technology you’ve chosen and why you chose it?

Anush:  Yeah. Our tracking pipeline is architected to apply on various fronts, both outside-in and inside-out. Outside-in is what we’re currently focused on: you have a small camera sensor that is a standalone module, you could think of it like a Dropcam essentially, that’s looking towards you, tracking your headset and your controllers, and communicating wirelessly to the smartphone or the computer. The host could be a smartphone or a PC, so it’s mobile or desktop, but the tracking solution itself can also be applied to the other side of tracking, which is inside-out, where the cameras are mounted in your HMD looking out.

                Examples are the HoloLens or similar devices, which have cameras on board that do the tracking, and Project Tango is a good example. Even in that case, we have a tracking solution that can work with an inside-out setup and give you position and tracking for both the HMD and the controllers. These are at various stages of the roadmap, but overall our solution is that we will have a position and tracking system for any mode that you choose, outside-in or inside-out. And that brings us to the technology piece.

                We’re also a little agnostic there: yes, we use a camera, but the initial poses and how we extract them could quickly be swapped to any other technology that’s available, and the higher layers of the stack, pose estimation, prediction, and all of that, would apply directly, irrespective of the underlying core technology. We have architected it in such a way that we are not bound to how the technology is used, or which technology we use; the core tracking components can apply across various form factors and implementations.
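
As a rough illustration of that layering, here is a sketch (my own, in Python; not Nod’s actual code) of how the pose source can sit behind an interface so the higher estimation and prediction layers stay agnostic to outside-in versus inside-out sensing:

```python
from abc import ABC, abstractmethod

Pose = tuple  # (roll, pitch, yaw, x, y, z), as in the sketch above

class PoseSource(ABC):
    """Sensing layer: anything that can produce a raw 6 DOF pose."""
    @abstractmethod
    def raw_pose(self) -> Pose: ...

class OutsideInCamera(PoseSource):
    """Standalone camera module looking at the headset and controllers."""
    def raw_pose(self) -> Pose:
        # Real version: detect the controller in the camera frame and
        # solve for its pose; stubbed here with a fixed value.
        return (0.0, 0.0, 0.0, 0.3, 1.4, -0.8)

class InsideOutCamera(PoseSource):
    """HMD-mounted cameras looking out at the room (HoloLens-style)."""
    def raw_pose(self) -> Pose:
        # Real version: localize the HMD against room features.
        return (0.0, 0.1, 0.0, 0.0, 1.6, 0.0)

class TrackingPipeline:
    """Higher layers never see which sensing technology is underneath."""
    def __init__(self, source: PoseSource):
        self.source = source

    def predicted_pose(self, latency_s: float, yaw_rate: float) -> Pose:
        roll, pitch, yaw, x, y, z = self.source.raw_pose()
        # Naive prediction: extrapolate yaw forward by the expected
        # render latency so the displayed pose feels current.
        return (roll, pitch, yaw + yaw_rate * latency_s, x, y, z)

# Swapping tracking modes is just swapping the source:
pipeline = TrackingPipeline(OutsideInCamera())
print(pipeline.predicted_pose(latency_s=0.016, yaw_rate=1.2))
```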

VRguy:  What do you see as the most popular use case? Is it casual motion tracking, where I go someplace, want to run an application, and whip out the motion tracker to use it? Or is it a fixed installation?

Anush:  I think the initial VR and AR use cases will all be in a personal space, whether it’s quasi-personal on a desktop in your work environment, or in your living room, or in your study, somewhere you have a setup. We expect people to be okay with having something looking at them while they’re being tracked, while they’re using and experiencing VR. In the long run, there will be casual use cases of, “Oh, I’m sitting in the subway and I want to enjoy VR,” but there are social constraints about using VR and not being aware of your environment in situations like subways. Maybe on an airplane it would be okay, but we predominantly feel it is going to be an experience where you’re in front of an area that you control.

VRguy:  I guess to sense hands or fingers, there are fundamentally two approaches. One that requires you to hold something or wear something, and the other is sort of the natural interaction like Kinect, Leap Motion, and what have you. Could you explain your view on where one is more useful than the other and vice versa?

Anush:  Yeah. We think it’s like the keyboard and the mouse: you can always have one tool to do one part of it, and another tool to do another part. Hand tracking, in terms of your bare hands, natural motion, trying to grab objects, is very fascinating; it really gets people excited. What Leap Motion has done is pretty good in terms of the fidelity of your fingers and the skeletal tracking of your hands. The challenge becomes how much you can use it. If you take a step back and see what it’s good for after the initial “oh wow” phase, it’s good for menu navigation and selection. If you’re having a lean-back experience and you’re going through a movie catalog and you want to select something and tap on it, it’s easier to do with just your hands. You’re not keeping your hands up for a long time, and it’s good.

                The other side of it is when you actually want tactile feedback; that’s where it falls short. We’ve been accustomed to gamepads and remote controls where we just like the feel of a button, and we want it to respond quickly. Pressing a button in a split second with our human motor skills not only achieves the interaction, but also gives some sort of satisfaction. We’ve seen people play with the Ring and Backspin where they’re shooting some bunnies or something, and it gives them the sense that they are actually squeezing a trigger. That’s a different kind of experience. There’s room for both, and what we’re focused on currently is tracking with a device. We see that as a crucial component that hasn’t been fully addressed or solved, especially in a mobile form factor, so we want to solve that really well, and then we’ll see where it takes us.

VRguy:  You mentioned keyboard and mouse. That naturally leads me to ask about text input in VR. Maybe I want to select a website, or I need to type something in; using a keyboard doesn’t look like the right solution, definitely not for a mobile environment. How do you see that happening? What do you think is going to be the best way to enter text inside a VR application?

Anush:  That’s a very good question. Two years ago, we started a project where we built our own gestural swipe engine: you just swipe your finger across a keyboard and it predicts the word that you’ve swiped across. We’ve built it in such a way that it’s web-hosted, so we have a backend that can actually predict your words and send them back. You can use it in a Unity package where you drag and drop it, and you just swipe, and the predicted text is returned to the VR environment. We’ve integrated it with web pages and things like that. We believe that the ability to point in VR should be all that’s required for you to get text input, just like how on your smartphone a touch alone is enough for you to be able to type. We think we’ll have the same kind of method in VR.
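
As a toy illustration of how a swipe engine can rank candidate words against a traced path, here is a sketch in Python (my own crude scoring, not Nod’s hosted engine, which would also lean on a language model):

```python
import math

# Approximate key centers on a QWERTY layout (unit grid, staggered rows).
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
KEYS = {ch: (col + 0.3 * row, float(row))
        for row, line in enumerate(ROWS)
        for col, ch in enumerate(line)}

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def score(word, swipe):
    """Crude mismatch score: walk the swipe once, letting each letter of
    the word greedily advance to an ever-closer swipe point, then charge
    the remaining distance to that letter's key. Lower is better."""
    i, total = 0, 0.0
    for ch in word:
        key = KEYS[ch]
        while i + 1 < len(swipe) and dist(swipe[i + 1], key) < dist(swipe[i], key):
            i += 1
        total += dist(swipe[i], key)
    return total

def predict(swipe, vocabulary):
    """Rank vocabulary words best-first for a traced swipe path."""
    return sorted(vocabulary, key=lambda w: score(w, swipe))

# A swipe roughly tracing h -> e -> l -> o across the layout:
swipe = [KEYS[c] for c in "hgferuilko"]
print(predict(swipe, ["hello", "help", "halo", "world"]))  # "hello" ranks first
```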

VRguy:  Do you envision the same input methods or the same controllers also being applicable in the same way to augmented reality, or do you see a difference between AR and VR in that regard?

Anush:  I think for AR it will be more or less the same, but there are some more challenges with respect to human perception and not breaking presence in AR; there have to be even tighter latency constraints for you to be able to do all of that and achieve a good experience. I think fundamentally it will be the same. It’s just that there may be some size constraints on controllers so that you don’t block your view, things like that, which have to be factored in.

VRguy:  I understand. When I think about things that people hold in their hand or are willing to wear, it used to be a watch, and now there’s a smart watch, and maybe a Fitbit or another fitness band. Do you see these devices converging? Meaning that the fitness band all of a sudden can do virtual reality input, or vice versa: maybe one of your devices ends up also being a fitness tracker?

Anush:  Yeah. We specifically stayed away from the fitness tracker, but we have all the hardware to be one. There is nothing that prevents us from going down that route, but one of the things we wanted to avoid is turning the device into a Swiss army knife. When we designed the Ring, one of the biggest design questions we got was, “Hey, why didn’t you put an LCD screen on it, or an LED screen, so it can give you notifications, and you’d be able to do this and that?” We quickly realized we’d be going down the slippery slope of trying to make it so useful by adding all these little apps, kind of like your Apple Watch, if you will.

                I personally don’t think that has a good enough use yet, but it kind of felt that way. We decided against having anything that would draw attention to the ring itself; the experience is all about a heads-up experience where you’re looking at what you want to engage with and using your fingers or your hand that way. I think technically they can merge, and I’m sure there will be partners who try to do that, have a fitness tracker that can do gestures, or a smart watch that can type in the air, things like that, but I think it’s just not going to be useful for the human population in general.

VRguy:  As we get close to wrapping up: when I first saw the Ring, one of the applications that came to my mind was having it used by people with physical disabilities. I think about using Gear VR, for instance, with someone who has difficulty raising their hand to the touchpad; the Ring seemed like a non-intrusive, easy-to-use device that just goes everywhere with me. Do you also see that as something that’s happening, or are you really focused on the younger generation, gaming, and the mobile experience?

Anush:  We have had a lot of good success stories with the Ring used specifically for that. There were people who had been paralyzed in accidents, things like that, and they were very happy to hear that they could now type, or even engage with their laptops or their tablets, by using just one finger to hold it and track. We’re definitely focused, well, focused is the wrong word. We definitely want to encourage them to use it. We don’t have a focused push into such use cases, but some partners have reached out to us, and we are going to rely on them to make that happen, because they have more experience and expertise in those verticals and can do a better job. From a technology standpoint, we work with them closely to make sure we’re tracking the finger motion really well and communicating it in a low-latency way, so that they can build those experiences.

VRguy:  Excellent. Anush, where could people connect with you and your company online to find out more about what you’re doing?

Anush:  I’d encourage everyone to go to NOD.com. There we have more information on partnering with us, or buying a developer kit, or our forums, so that should be a good place to start.

VRguy:  Excellent. Thank you very much for coming to the program.

Anush:  Thank you.
