VRguy podcast Episode 11: Greg Merril, CEO, Yost Labs, on Full-Body Sensors

My guest today is Greg Merril, CEO of Yost Labs. This episode was recorded on April 20, 2016.

Greg and I talk about the complexities of building single-sensor and full-body sensor systems, whether these sensors will be embedded in regular garments, and much more.

Greg has been developing award-winning medical and consumer electronic products for over 25 years. He has served as founding CEO for three VC-backed, fast-growth health-related product companies: HT Medical Systems, Interactions Labs, and Brain Sentry. At HT Medical he led the company through a $42 million merger with Immersion Corp (NASDAQ: IMMR). Mr. Merril was recognized as a 2013 TEDMED Innovation Scholar for his work on head impact sensors, and he is a regional Ernst & Young Entrepreneur of the Year.


Interview transcript

VRguy:  Hey Greg, and welcome to the program.

Greg Merril:     Hey Yuval. Thanks for having me.

VRguy:  Thanks for coming online. Who are you and what do you do?

Greg:     I’m Greg Merril. I’m the CEO of Yost Labs, based in Portsmouth, Ohio. We make very fast inertial motion sensors, and we are in the process of launching our PrioVR Dev Kit, which is a full-body immersive inertial motion-capture suit.

VRguy:  You and I have known each other for quite a few years and I think you’re fairly new to Yost Labs, so can you tell everyone how you got here?

Greg:     Yeah, sure. You and I go back to the first wave of virtual reality, back in the 1990s. I guess I’ve been working in virtual reality since about ’94, when I was focused on developing virtual reality medical training systems, to train doctors in how to do surgery in a virtual environment so that they didn’t put their patients at risk as they learned. I went on and started creating video game controllers and I remember you and I collaborated on some of that work, which included some military applications where soldiers could walk around using a leg joystick we developed, so they could navigate in virtual environments.

                Just about seven months ago, a headhunter reached out to me, telling me about this company in Appalachian Ohio called, at that time, YEI Technologies, that had developed inertial sensors and was launching a full-body motion tracking suit. They had a really successful Kickstarter campaign, and they were looking for a new CEO. I looked at it and saw what they had in terms of technology, with very fast sensors, and it was interesting, and then seeing the list of customers, which included the Navy and a lot of the technology leaders, helped me validate that this is a company that’s doing something. With Oculus, of course, and all the money that’s going into virtual reality right now, it seems like the time is finally here where virtual reality and augmented reality applications are going to make it. Significantly different than when you and I were working in the 90s. Now there’s so much critical mass behind this wave into virtual reality that I wanted to reengage in the market, and this was a great opportunity. I’m really pleased to join Paul Yost in pushing forward with this company, these sensors, and these suits.

VRguy:  The PrioVR system is essentially a number of individual sensors that you make that are linked together. Is that correct?

Greg:     Yeah. It started with Paul’s work with robotics. He felt that the inertial sensors weren’t optimized for the robotic systems he was working on, so we developed sensors that had accelerometers and gyros and magnetometers. He used AHRS IMU sensor technology and started packaging those sensors and offering them for sale. Turned out that a lot of customers were buying the sensors and strapping them onto body parts for rehabilitation, for virtual reality applications, and Paul thought, well if they’re going to do that, there’s a better way to optimize the system if you really focus on building a full body immersive suit. You can decrease latency by doing some things with the way the communication protocols work between the sensors and between the sensor system and the computer and that process led to this PrioVR Dev kit which the company put forth in a Kickstarter campaign.

                I guess it was about two years ago and the intent was hey, let’s try to raise $75,000 to put this suit together. The Kickstarter campaign ended up with something like $320,000 of pre-orders, so really demonstrated the interest in this suit and the technology. The company had some bumps in the road and it’s been slow in getting things done. Slow may be an understatement. It’s been almost two years now since that Kickstarter campaign, but fortunately the company has brought in some capital. I’ve joined the company and we’re pushing forward, looking to start delivery of those suits at the end of May. Very exciting time there.

VRguy:  How is the work for a single sensor different than strapping a dozen or sixteen sensors on your body? First, how long does it take to wear a suit like that?

Greg:     That’s a concern, of course, when you have a user requirement that they strap something on. That’s something we’re all concerned about. Certainly, it’s a concern also with things like head-mounted displays, where you have to put something on, and the suit’s even more of a burden than a head-mounted display. With this suit, we focused a bit on ergonomics. Once you’ve got it figured out, it really takes about a minute to put it on. The first time, it might take five minutes to put the suit on. I think that ultimately the current PrioVR Dev Kit is not really an end-user experience. That’s why we call it a “dev kit”: it’s got nineteen sensors, seventeen in the suit and one in each of the two hand controllers that come with the suit.

                The idea is really to give developers a tool kit so they can create immersive experiences and then we want to team up with those content developers to figure out the right combination of sensors to work with their particular application. We’re talking to some people about things like fencing games and fencing games would be really interesting because a fencer in real life wears a suit, so there you’re really starting to simulate reality by having them put on a fencing suit. It just happens that the fencing suit has inertial sensors built into it.

VRguy:  If I put on the individual sensors, and let’s say I put them on, take them off, put them on again, it’s likely that they’re not going to be at precisely the same position and orientation the second time around relative to the first time. How do you address that?

Greg:     That’s pretty much a solved issue. The suit does require calibration, so each time you put the suit on, you calibrate it to your new pose. To really optimize it, the system uses this concept of inverse kinematics: it’s mapping the sensor locations to a computer model of your body, a skeleton of your body in the computer. If the computer knows your height, for example, then it can much more closely map your real-life joints to the joints of your avatar in the virtual environment.
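The mapping Greg describes, per-bone sensor orientations applied to a skeleton scaled to the user's height, can be sketched as follows. This is a minimal illustration, not Yost Labs' actual algorithm; the reference bone lengths and function names are hypothetical:

```python
import numpy as np

def quat_rotate(q, v):
    # Rotate vector v by unit quaternion q = (w, x, y, z).
    w, x, y, z = q
    u = np.array([x, y, z])
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

# Illustrative bone lengths (metres) for a 1.75 m reference body.
REFERENCE_HEIGHT = 1.75
REFERENCE_BONES = {"upper_arm": 0.28, "forearm": 0.26}

def chain_positions(user_height, orientations, shoulder=np.zeros(3)):
    """Place elbow and wrist from per-bone sensor orientations,
    with bone lengths scaled to the user's height."""
    scale = user_height / REFERENCE_HEIGHT
    rest = np.array([0.0, -1.0, 0.0])  # bones hang straight down in rest pose
    joints = [shoulder]
    for bone in ("upper_arm", "forearm"):
        offset = quat_rotate(orientations[bone], rest) * REFERENCE_BONES[bone] * scale
        joints.append(joints[-1] + offset)
    return joints  # [shoulder, elbow, wrist]
```

Knowing the user's height makes `scale` accurate, which is why the avatar's joints line up more closely with the real ones after calibration.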

VRguy:  Ultimately you think the sensors could be baked into a jacket or suit, something that you truly wear, a garment, and then the relative positions of the sensors are better known, and also putting it on and taking it off is even faster.

Greg:     Just talking over the horizon, where does this all go? I think that if you look at trends, clearly inertial sensors are going everywhere compared to back when we first started doing this stuff in the 90s. We didn’t have these MEMS accelerometers, so little and inconspicuous you could put them anywhere. Now they’re in our iPhones and all over the place. Looking over the horizon at where this goes, I think it goes in our underwear. Every article of clothing, why wouldn’t it have these sensors in there? You can imagine the ability to do pattern recognition just walking around. If these sensors are built into our clothes and you’re walking around and there’s a little computer that’s monitoring your movement, it can start to predict or detect if you’re favoring your right leg over your left leg, for example.

                There’s value in having these sensors in our clothes just in general for that kind of injury prediction. And not just injury prediction; people use this stuff for sports performance enhancement. The Canadian Olympic runners use our sensors for doing gait analysis of their run. Also, researchers are using this for sports; just today I saw one of our customers is using our sensors for a golf swing analysis system. Once you’ve got these types of sensors integrated into our clothing, then the transition from reality to virtual reality is much easier, because you’re already wearing the sensors. Now you just put on your head-mounted display and you’re in a virtual environment and your full body is tracked.

VRguy:  We see customers that decide to build their own sensors, so they take some inertial chip and say: oh here’s a gyro, here’s a magnetometer. It’s actually probably much more complex than that right? That’s part of your business. Could you explain why it’s complex to build your own sensor?

Greg:     The key part of these sensors, what makes them useful, is what’s referred to as sensor fusion. It’s looking at the output of the accelerometers, which are linear movement measurement devices, and then there are the gyroscopes, which measure the rotational aspects. Then, often, as with our sensors, we integrate a magnetometer, or compass, which allows us to provide an absolute orientation, because we know which direction is magnetic north, and then we can tie the other information into that with this sensor fusion. Since the early 1960s, with the space program, people have been working on combining sensor output and developed a class of algorithms referred to as the Kalman Filter. That’s a way of combining the outputs of these sensors.

                The challenge is that that math is very complicated. It’s difficult to optimize … it often needs to be customized for particular applications, and so a company like Yost Labs that’s focused really exclusively on solving that problem is just a shortcut for someone who’s trying to do this kind of thing by buying the components and integrating them. What we’ve ended up doing is actually departing from what has been the industry standard of the Kalman Filter and developing a totally different methodology for sensor fusion we call Q-Grad, and it’s one tenth the computational load of a Kalman Filter. On the bench, when we measure the output, the latency, we’re at about one third the latency of a Kalman Filter running on the same CPU.

                That’s why you come to something like Yost Labs is to optimize that motion tracking.
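The basic idea behind the fusion Greg describes, blending a smooth but drifting gyro estimate with a noisy but absolute gravity reference, can be illustrated with a textbook complementary filter. This is a standard technique shown for intuition only, not Yost Labs' Q-Grad; the function and parameter names are illustrative:

```python
import numpy as np

def complementary_filter(pitch, gyro_rate, accel, dt, alpha=0.98):
    """One update step of a basic complementary filter for pitch (radians).

    gyro_rate: angular velocity about the pitch axis (rad/s)
    accel:     (ax, ay, az) accelerometer reading in g
    alpha:     trust placed in the integrated gyro vs. the accel tilt
    """
    # Integrate the gyro for a smooth short-term estimate...
    gyro_pitch = pitch + gyro_rate * dt
    # ...and compute an absolute (but noisy) tilt angle from gravity.
    ax, ay, az = accel
    accel_pitch = np.arctan2(-ax, np.sqrt(ay**2 + az**2))
    # Blend: the gyro suppresses noise, the accel corrects long-term drift.
    return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch
```

A Kalman Filter does this blending optimally by tracking estimate uncertainty over time, which is where the heavy math and computational load come from; the appeal of lighter-weight fusion schemes is getting comparable orientation quality at a fraction of that cost.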

VRguy:  What’s a refresh rate and a latency that one could expect using your sensors?

Greg:     The refresh rate for our 3-Space sensors is about 850 Hz running our Q-Grad sensor fusion. On that same processor it’d be about 200 Hz with a Kalman Filter. It’s pretty fast. You think about the full PrioVR Dev Kit suit, we’re in the sub ten-millisecond latency with that suit which is pretty amazing when you think about nineteen sensors all fitting through a hub and then wirelessly communicating to a computer. When you really focus on latency, it’s possible to achieve some pretty amazing things with really low cost hardware.
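As a back-of-envelope check on the numbers Greg quotes (the 90 fps HMD figure is an assumption for illustration, anticipating the next question):

```python
SENSOR_HZ = 850   # 3-Space refresh rate with Q-Grad fusion
KALMAN_HZ = 200   # same processor running a Kalman Filter
HMD_FPS = 90      # assumed consumer HMD display rate

readings_per_frame = SENSOR_HZ / HMD_FPS   # ~9.4 fused readings per displayed frame
fusion_speedup = SENSOR_HZ / KALMAN_HZ     # 4.25x the refresh rate on the same CPU
frame_budget_ms = 1000 / HMD_FPS           # ~11.1 ms per frame, so sub-10 ms
                                           # suit latency fits within one frame
```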

VRguy:  If I went to a game developer, they would say, yeah, I want to minimize latency. Latency is the enemy of immersion and presence, but how fast do you think it needs to be in terms of refresh rate? I mean, even when you’re doing eight, nine hundred updates per second and you’re using an HMD with 90 frames per second, you’re already providing ten readings for every frame. Do you think it needs to go much higher, or are we about where we need to be on the refresh rate?

Greg:     Yeah, I think particularly in the area where you’ve been focused, with the head-mounted displays, that’s where there’s a tremendous amount of sensitivity around latency, right? It’s probably similar to what we’ve learned over the years with graphics rendering, where things in the center of vision need to be rendered at higher resolution than things in the periphery. The same kind of thing could probably be done with latency, where head tracking for a head-mounted display is more critical, because your eyes want to see what your head feels it is doing, as compared to, say, your shoulder, where it’s maybe not as critical. Maybe it doesn’t matter if your shoulder is at twenty milliseconds versus fifteen milliseconds.

                There’s that, but I think that there will be a point at which it’s fast enough, and at that point what we see is there’s another advantage to being fast. If you’re faster than you need to be, you can start to reduce the cycles on the CPU. Start reducing the load that’s required on the CPU, which starts reducing the power requirements. I think one of the things that we’re seeing, particularly with untethered head-mounted displays, is that there’s a pain point for manufacturers: how do we reduce the power requirements? How do we reduce the heat buildup? You see that with a lot of the mobile-phone-based displays. They get really hot.

                Every little cycle of CPU is going to reduce heat and reduce battery power. Once you’ve gotten the latency to the point where it’s good enough, you can start applying that efficiency in the algorithm to reducing the power consumption and heat. I think there’s still a lot of work to be done. I think that we’re not there yet with the latency issue. Overall in terms of a price point that’s certainly not mass market consumer ready but we’re moving that direction.

VRguy:  If you have all these sensors, there’s probably a sensor inside the HMD anyway, maybe I’ve got a sensor on my fitness band, I wear your suit; I’ve got a whole bunch of other sensors. Isn’t there a problem of alignment or calibration even beyond the confines of your system, because I have other sensors in the room, in the world, on my body that also need to be pointing in the same direction?

Greg:     I don’t know. Are you thinking that maybe people will be wearing a bunch of different sensors? Perhaps, in the future … the fitness bands will go away. If we’re talking about where does this go in the future, over the horizon view, if you buy into this vision that I put forward that these sensors are going to be in your underwear, I think the fitness bands go away and maybe this one suit becomes the sensor array for motion on your body. You don’t need a fitness band if you’re wearing sensors in your underwear. You don’t need sensors for virtual reality interface if the sensors are in your clothing like that. Maybe with this, it kind of consolidates them all in that package but I think in terms of the way that that sensor suit would interact with other sensors, I think each use case needs to be looked at individually.

                I think that’s one of the reasons why what we’ve done with this PrioVR Dev Kit is to call it a “dev kit” and say this is a group of sensors that will allow us to experiment and develop content, but I think it’s difficult at this stage in the industry to define a generalized full-body immersive sensor array. This is a sensor array. It’s going to work for virtual reality. It’s going to work for rehabilitation and military training. I think each of these use cases needs to be looked at and optimized on its own. Then maybe over the horizon, we’re able to generalize this into okay, everyone wears inertial sensing underwear, but we’re not there yet.

VRguy:  As we move towards wrapping up the conversation, let me just pose the devil’s advocate view. How about no sensors? If I want to play a game, even a full motion game, I might have a camera placed on my monitor or TV and the camera can look at my body, so why do I need any sensors beyond perhaps a head tracking sensor?

Greg:     I think that this is a discussion that’s going on in our industry: which technology is good enough? I think there are issues with current camera systems with occlusion and with the range of view, which limits the operating space in which you can work. These inertial suits can work in basically unlimited range, and you can have many, many people in the same space without worrying about occlusion. There’s benefit right now to the way the inertial sensors work, and there are clearly benefits to the way cameras work in terms of not having to wear any sensors. I think this is a case-by-case question of what’s going to work for which particular application, and maybe eventually it’ll neck down to one ultimate solution, or maybe both of these technology paths will remain viable for their own applications. Hard to say. I can’t say right now.

VRguy:  Excellent. Greg, where could people connect with you online to learn more about what you’re doing?

Greg:     Please feel free to go to yostlabs.com and if you’d like to reach out to me, I’m at greg@yostlabs.com

VRguy:  Perfect. Thanks so much for coming onto the program.
