VRguy podcast Episode 27: David Oh from Meta discusses augmented reality

My guest today is David Oh, head of developer relations at Meta. This episode was recorded on September 1st, 2017.

David and I discuss the use cases for augmented reality in the office, input with augmented reality and much more.

Yuval Boger (VRguy):    Hello David, and welcome to the podcast.

David Oh:         Hey Yuval, how’s it going?

VRguy: It’s going great. Thanks for joining me. So who are you and what do you do?

David:  Yeah, so my name is David Gene Oh. I currently lead developer relations at Meta. I have a background in creating video games for companies you may have heard of, like Ubisoft. I also worked at a company called Leap Motion, focusing on hand tracking. That’s actually how we met, Yuval. Of course, Leap Motion also supported OSVR, the virtual reality HMD; that’s how we got to know each other, which is really cool. And I love paddle boarding, how’s that?

VRguy: That’s great. So I think we’ll leave the paddle boarding to another podcast. Tell me a little bit about Meta. I’ve seen the headset, I’ve tried the headset. Where do you see it catching on? What use cases or types of applications are most suitable to a Meta, a HoloLens, an Epson, a Vuzix, or some of the other augmented reality headsets?

David:  Yeah, so what Meta is focusing on is purely the office, and in particular how you interact with 3D models there. Whether you’re building 3D models, communicating with 3D models, or presenting 3D models. You can imagine all the different businesses that 3D models touch today: everything from architecture, engineering, construction, automobile and aerospace design, to simulation and data visualization. All those things require some type of 3D model that people can share or view or create. But right now, we’re looking at them on 2D monitors. Of course, when you can see 3D objects in 3D, there’s a lot more information there; there are layers of additional information. That’s what we’re really concentrating on today: ensuring that offices that interact with 3D models have a tool that can best present them.

VRguy: So how is that different from some of the HoloLens videos that are floating around, where people do exactly the same thing? They walk around a building or a car and collaborate on it.

David:  Yeah, great question. Basically, the difference between us and the HoloLens, an AR product by Microsoft, is that we really wanted to focus on our use case around 3D models, concentrating on field of view and resolution. That came from Meron, our founder and CEO. He did a lot of due diligence when he first started the company, to ask the right questions of our very first customers. You can imagine, when the Meta 1 came out, it was a Kickstarter project, very early on. But even then, a lot of industries, really the Fortune 50 companies, were all doing some type of R&D around augmented reality. Those were the early customers, even for the Meta 1 Kickstarter project.

            By listening to those customers, we learned that what they really wanted was understanding around 3D models and how to visualize them. What separates us from any product currently or soon on the market is really our field of view and our resolution. Our customers said, “Hey, in terms of visualizing 3D models, we want the widest field of view possible, so that none of the holograms gets cut off, as well as the highest resolution, so you can look really closely into a hologram and see all the detail.” That’s what really separates us from any other product out there: you get the widest FOV and highest resolution with our headset.

            The other differentiating factor is that we’re a tethered device; we rely heavily on the graphics card. We focused on that because our customers didn’t want portability; they just wanted enough power, from a powerful desktop graphics card, to run their simulations or visualize their data. That was the direction we decided to go. Those are the main differences between us and other AR products on the market. All the other AR products out there have their benefits too. I think that’s great, because with augmented reality being so new, we need all of these companies to be successful, so it can be substantiated that this tool is very powerful in the workplace and can be a game changer.
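
(A rough yardstick for the field-of-view-versus-resolution tradeoff David describes is angular resolution, i.e. pixels per degree: for a fixed panel, the wider the FOV, the fewer pixels land on each degree. A minimal sketch in Python; the numbers below are illustrative, not Meta 2 specifications.)

```python
# Angular resolution as a back-of-the-envelope comparison metric.
# Assumes pixels spread evenly across the field of view, which ignores
# lens distortion but is fine for rough comparisons between headsets.
def pixels_per_degree(horizontal_pixels: int, horizontal_fov_deg: float) -> float:
    return horizontal_pixels / horizontal_fov_deg

# Illustrative numbers only: a wide-FOV headset vs. a narrow-FOV one.
print(pixels_per_degree(2560, 80))  # wide FOV:   32.0 px/deg
print(pixels_per_degree(1280, 30))  # narrow FOV: ~42.7 px/deg, looks sharper
```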

VRguy: How do you envision interactions? So, if I want to rotate stuff, do I have to hold a wand, or is it freestyle with bare hands? How do you envision the best interaction happening?

David:  Right now we use AirGrab, which is our gesture-based hand tracking technology. It’s based on a point-cloud system where you’re able to grab objects and manipulate holograms. For example, you can scale a hologram by putting your hands inside it and closing your fists; then it’s almost like a pinch-to-zoom feature, where you spread your hands apart to change the size of the hologram. Or you can rotate your hands while they’re closed on the hologram, to rotate it and get a better view.

            Of course, a lot of my developers know that hand tracking is still very early. Hands are really good for a sterile environment; we have use cases in healthcare, where customers really need to make sure that any technology they bring into a healthcare or life-sciences setting stays sterile. Because we rely on your bare hands and not a controller, we can offer that capability.
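
(The math behind the two-handed gesture David describes is simple: scale follows the ratio of the distance between the hands, and rotation follows the change in heading of the hand-to-hand vector. A minimal Python sketch under those assumptions; the function and its inputs are hypothetical, not Meta’s actual AirGrab API.)

```python
import math

def two_hand_scale_and_rotate(left0, right0, left1, right1):
    """From initial (left0, right0) and current (left1, right1) hand
    positions, given as (x, y, z) tuples, derive a uniform scale factor
    and a yaw rotation (radians) to apply to the grabbed hologram."""
    def dist(a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

    # Pinch-to-zoom: scale tracks how far the hands moved apart.
    scale = dist(left1, right1) / max(dist(left0, right0), 1e-6)

    # Rotation: change in heading of the left->right vector in the
    # horizontal (x, z) plane.
    def yaw(a, b):
        return math.atan2(b[2] - a[2], b[0] - a[0])

    return scale, yaw(left1, right1) - yaw(left0, right0)

# Hands start 0.4 m apart, end 0.8 m apart: scale 2.0, no rotation.
s, r = two_hand_scale_and_rotate((-0.2, 0, 0.5), (0.2, 0, 0.5),
                                 (-0.4, 0, 0.5), (0.4, 0, 0.5))
```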

VRguy: Do you envision CAD providers creating, essentially, a connector to Meta? Or what’s the workflow like? Say I have a car or a building: do I export it to your manipulation software, or does Meta become embedded in the preexisting CAD program?

David:  Yes, good question. Some of the CAD partners we’re currently working with utilize Unity’s plugin system, where you can view CAD models through a Unity plugin; those plugins are currently available on the Unity Asset Store. Since we also support Unity, that’s an easy way to import CAD models.

VRguy: So it’s still separate programs, right? There may be an easy workflow, but can you really go back and forth? Can you change a model and then see it in your CAD program, or is that reserved for the future?

David:  Right now we have some partnerships that we haven’t announced yet, with probably the leading CAD vendors in the world; they are very interested in our technology as well. There will be some announcements regarding how that user experience will take place, which I think is pretty phenomenal and pretty seamless. That information is going to be coming out soon. What I’d advise your listeners is to register on our blog and join our developer program; you’ll probably be the first to get this type of information.

VRguy: What do you think, in the context of CAD, about light field displays that, supposedly, allow you to focus near and far and sort of get a cleaner view… or a more natural view of the model?

David:  I think light field technology is pretty awesome. But at the same time, in terms of what the market is ready for and what is available today, once you try the Meta headset and see the resolution and the field of view, you realize that light field technology may not be best suited for every project. That said, I don’t think incorporating light fields is very far away for any augmented reality company. As you can imagine, for anyone interested in computer vision and augmented reality, light field technology is very accessible to research and build upon. Those are things I think are really exciting for the future, for any company. But looking at the landscape today, I’m hearing back, especially from the developers in our community who have tried all the different headsets, that we’re poised to be pretty strong in our offering right now.

VRguy: So if I work at Tesla and I design cars, I could probably view the model on a 3D monitor, right? Or I could view it through a Meta headset. Now, if I view it on a 3D monitor, I have high resolution, and I have high contrast because there’s nothing from the real world impeding my contrast. But on the other hand, it’s more difficult to do it in a collaborative manner; you and I couldn’t be looking at the same monitor from different directions. So is collaboration the selling point? Or do you feel that working with the Meta is superior to using a 3D monitor in other ways?

David:  You’ve brought up a really great point, Yuval. I think collaboration does play a major factor, in terms of the immersiveness, or the human bonding, that takes place when previewing a 3D model. You can obviously view a model on a 3D monitor, per se, but people can’t walk around it the way they can walk around a hologram right now. With a 3D monitor, people can’t actually interact with the models and pass them back and forth. That’s something we’re really striving for: collaboration.

            We actually showcased just a snippet of that at this past Sundance Film Festival, where we were part of their tech showcase series. It was the first time people could go through an interactive storytelling experience where they were actually interacting with the same holographic 3D model. That was really astounding. I was there at the festival giving demos. There were times people were almost on the verge of hysteria, or even tears, just in awe. Maybe some people were even scared, because they didn’t know what this technology really meant. One thing that really resonated was how personal the experience became. This was actually the first time people interacted with a piece of digital content socially. In a circular setting, people would enter the room, face each other, and learn how the brain actually works, with this hologram floating in front of them. People were taking the brain apart and putting it back together. But together was the main aspect of it. It was really powerful stuff.

            I think that’s really where the difference lies between 3D holographic AR experiences and seeing them on a 3D monitor.

VRguy: Even though a monitor would give you, I suspect, higher resolution and higher contrast and you can see finer details.

David:  Sure. Monitor technology has evolved over decades now, right? I think the real disruption is that you’re able to look at a 3D holographic model, maybe not with as much contrast as you can see on a monitor, but you can actually walk around it, or even peer into it very closely and move your head 360 degrees around it, just like with a physical object. You’re just not able to do that today with any technology other than augmented reality.

VRguy: With the current offering, how much am I able to overlay new virtual objects on existing physical ones? Is that just a nice demonstration or does that actually work these days?

David:  Well, there are a lot of smart people trying to solve just that problem. You can do it with certain third-party marker tracking. There’s a company, whose name escaped me for a moment, and they probably wouldn’t be happy that I forgot about them: Gravity Jack. They have an interesting technology where they can represent any physical 3D object and then superimpose a digital 3D object onto that same object. So you get the tracking and the ability to interact with the physical piece of content, but also the benefits of it being digital: layering information onto it, manipulating it, customizing it at any given time. I think with the solutions you’ll start seeing become part of augmented reality, there are going to be all these different technologies, add-ons, upgrades if you will, that let you interact with both physical objects and augmented ones, to really give a mixed reality experience.

VRguy: So if we look a little bit into the future, what would you like to see in the next version of your product and what would you like to see third party partners add to the experience?

David:  Great question, Yuval. Yeah, it’s really timely. I don’t know when this podcast is coming out, but we have some news that we’re going to be offering Steam support through OpenVR. Being able to do so opens up a plethora of content opportunities for developers who’ve been developing on SteamVR, to bring their experiences into augmented reality. What I’m really excited about: I actually just checked out Google Earth on our headset, running from SteamVR. I also checked out Lucky’s Tale on our headset. It’s pretty freaking awesome. You’re not going to get the same quality of color when directly importing VR experiences into AR, of course, because no color correction was made.

            But it doesn’t matter. When you look at some of these objects, especially in Google Earth, they just look like badass holograms. It’s like Star Wars come to life, being able to interact with holograms. And you kind of forget that the color quality may not be as stark and contrasting as it would have been in VR. That’s what I’m really most excited about, because then we can support all the SteamVR developers as well.
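
(To make the OpenVR point concrete: an application written against OpenVR queries the runtime rather than a specific headset, which is why existing SteamVR content can render to a new device once that device exposes an OpenVR driver. A minimal sketch using the community pyopenvr bindings, installed with pip install openvr; nothing here is specific to Meta’s implementation.)

```python
import openvr  # community Python bindings for Valve's OpenVR API

if openvr.isHmdPresent():
    # Ask the runtime for whatever headset its active driver exposes.
    vr_system = openvr.init(openvr.VRApplication_Scene)

    # Per-eye render target size recommended by that driver.
    width, height = vr_system.getRecommendedRenderTargetSize()
    print(f"Recommended per-eye render target: {width}x{height}")

    # Which tracking system backs the HMD (SteamVR, or a vendor driver).
    name = vr_system.getStringTrackedDeviceProperty(
        openvr.k_unTrackedDeviceIndex_Hmd,
        openvr.Prop_TrackingSystemName_String)
    print(f"Tracking system: {name}")

    openvr.shutdown()
```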

VRguy: So what’s missing in your experience? What would you like? Where do you see the headset going?

David:  Right now we’re really focusing on enterprise solutions for companies that probably already have really extensive VR initiatives at their offices or their factories. Replacing those kinds of experiences with augmented reality is something that’s definitely a lot more consumable, and definitely a lot more user friendly, than some of these bigger, really extravagant, powerful VR rigs; some of those kits can run up to several hundred thousand dollars, right? Seeing progress on some of those opportunities is really exciting to me. I’m working with a lot of auto manufacturers and companies visualizing automobile design.

            But I think what I’m really most excited about is the opportunity for working with 3D models, and interacting with them, to become gamified in a way. With layers of information that not only make you more productive, but where the hook that gets you there is the mechanics of fun. I think you’ll be seeing some stuff coming out of Meta where a lot of those alliances come into play: a tool that’s productive, but also awe inspiring and fun, immersive, with the same draw as some of the fun games you might play. I think that’s what’s missing in the space as a whole, and I’m really excited to be working with developers who really understand this, and who are creating some really awesome stuff for augmented reality.

VRguy: So as we get closer to the end of the podcast, I wanted to ask you about fine motor input. I understand that you can reach out, close your fist, grab the hologram, rotate it around and see it from the other side. But what happens now if I want to take some notes? Do I resort just to pen and paper? Is there a way for me to type or sketch or draw something with precision on an augmented reality model?

David:  Yeah, I mean, I think you know as well as anyone that having fine input control is extremely important, for VR and AR screens alike. Even with the Steam support we’re offering, though in the beginning we’re just going to be offering rendering, we are going to be bringing controllers into the mix. At the same time, controllers are great for specific actions. With controllers in your hands, whether Vive controllers or any third-party 6-DOF controllers, you’re not going to get the accuracy you get from a mouse, right? And because we’re augmented reality and not VR, a lot of our developers are using the mouse.

            We actually have a partner called Schema that is focusing on architectural experiences, which are really awesome: you can sketch out a blueprint, in 3D, of a building you would like to design. You can do all of that with the precision of a mouse, or you can also interact with your hands, pulling the walls apart. That dual functionality, I think, is going to be necessary, because augmented reality is a productivity tool. And of course, because we’re AR, our developers and users actually use the keyboard. I use the headset every day; I don’t have a monitor on my workstation, except for installing programs. I’m able to type, of course, because I can actually see the keyboard at any given time, and I use that to do email and slide presentations. There’s never a day that goes by that I don’t become amazed and realize I’m living in the future, working in this field of augmented reality, just sending out emails on my holographic display.

            All the inputs I just mentioned are very important, but they each have a different form factor and a different usage, right? I think that’s how we’re starting to see input develop in AR. At this last AWE conference, we showcased some really cool products that were all built in a two-day hackathon, which is fantastic; it shows developers how quickly and easily you can build on our platform. Meron also showed off our note-taking application; there’s a Meta iOS and Android application where you can use your phone as an input. That’s one thing we’re really excited about, and we’re doing a lot of R&D around it: the idea of multiple inputs for your augmented reality screens. Because AR, again, is just another way to see the world, with layers of information, you can have a lot of tools, the keyboard, the mouse, your hands, other control inputs, even mobile technology, all coming into play.
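
(One way to picture the multi-input idea David describes: each modality, mouse, keyboard, hands, phone, feeds a common event stream, and the application subscribes to actions without caring which device produced them. A minimal Python sketch; all names here are illustrative, not Meta’s SDK.)

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class InputEvent:
    source: str    # "mouse", "keyboard", "hands", "phone", ...
    action: str    # e.g. "select", "grab", "annotate"
    payload: dict  # source-specific data (position, text, ...)

class InputRouter:
    """Routes events from any input device to action handlers."""
    def __init__(self):
        self._handlers: dict[str, list[Callable[[InputEvent], None]]] = {}

    def on(self, action: str, handler: Callable[[InputEvent], None]) -> None:
        self._handlers.setdefault(action, []).append(handler)

    def dispatch(self, event: InputEvent) -> None:
        for handler in self._handlers.get(event.action, []):
            handler(event)

# Precise input can come from the mouse, coarse spatial input from the
# hands, and text from the keyboard or a phone; the app doesn't care which.
router = InputRouter()
router.on("annotate", lambda e: print(f"note via {e.source}: {e.payload['text']}"))
router.dispatch(InputEvent("phone", "annotate", {"text": "check wall thickness"}))
```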

VRguy: So that’s your new email signature, right? “Sent holographically.” That would be a first.

David:  I should start doing that. That’s a good idea, Yuval. You always have great ideas.

VRguy: Why thank you.

David:  Yeah, that’s what I have to do.

VRguy: So how can people connect with you holographically or otherwise to learn more about what you are doing?

David:  Yeah, so you can check me out on LinkedIn; that’s probably the best way to contact me. I may not respond for several days, because as you can imagine I get pretty flooded, but I am very responsive on LinkedIn, and I like to share all the cool happenings around Meta through that channel. Also, Tony Parisi and I do the DopamineVR podcast; you can check it out at dopamineVR.com. And you can find me on Twitter at davidosez.

VRguy: Excellent. David, thanks very much for joining me today.

David: Always a great pleasure.
