Imagine spilling a box full of Lego bricks onto a table. Now—take a leap with me—don your imaginary augmented reality glasses. The camera in the AR glasses will immediately start cataloging all the different types of bricks in front of you, from shapes to colors, offering up suggestions for models you can build with the pieces you have. But wait, someone is at the door. You go to answer it and come back. Thankfully, your glasses don't need to rescan all of those pieces. The AR software knows they're sitting on the table where you left them.

That ability to continuously remember real-life objects that have been scanned is the main pitch of a new AR software platform called Perceptus from Singulos Research. Perceptus can hold those objects in memory even if the camera is not directly looking at the scene anymore. As you walked over to answer the door, the Perceptus platform kept thinking about what else you could build with the pieces on the table. It didn’t stop working just because you were no longer looking at the pieces. 
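Conceptually, that kind of persistence comes down to anchoring recognized objects in world coordinates rather than in the camera's current view, so the record survives when you turn away. Here is a minimal sketch of the idea in Swift; the DetectedObject and ObjectMemory types are hypothetical illustrations of the concept, not Singulos's implementation.

```swift
import Foundation
import simd

// Hypothetical record of an object a Perceptus-style layer might remember.
// Positions live in world coordinates, so they remain valid when the camera moves away.
struct DetectedObject {
    let id: UUID
    let kind: String                    // e.g. "lego_brick_2x4_red"
    let worldTransform: simd_float4x4   // pose in the shared world frame
    var lastSeen: Date
}

// A simple registry that keeps detections independent of the current camera view.
final class ObjectMemory {
    private var objects: [UUID: DetectedObject] = [:]

    // Called whenever the recognizer reports an object in the current frame.
    func update(_ object: DetectedObject) {
        objects[object.id] = object
    }

    // Everything the system remembers, whether or not it is currently visible.
    func allKnownObjects() -> [DetectedObject] {
        Array(objects.values)
    }

    // Objects of a given kind, e.g. to keep suggesting builds while the user is away.
    func objects(ofKind kind: String) -> [DetectedObject] {
        objects.values.filter { $0.kind == kind }
    }
}
```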

“When we are in an AR space, we don’t look at the whole room all at once, we only look at a part of it,” says Brad Quinton, Singulos Research’s CEO. “As humans, we have no trouble with the idea that there are things that exist that we can’t see at the moment because we saw them before and we remember them. Once you have AR that can understand what’s around you, it can go off and proactively do things for you.”

At least, that’s the idea. Perceptus acts as a layer above existing AR technologies like Apple’s ARKit or Google’s ARCore, which developers use today to create AR apps. But a lot needs to happen behind the scenes before this can work on your smartphone or tablet. 

The app developer provides Singulos Research with 3D models of the Lego bricks—or any object—it wants Perceptus to detect. The platform then runs a machine learning process that studies all the different ways it can expect to see that object in the real world: under different lighting conditions, on various surfaces, and so on. Perceptus is then layered over the developer's app, giving it this new object comprehension. It's the developer's job to make sure the app actually gives you things to do with the objects, the way our imaginary Lego app might suggest things you can build using the bricks it identifies.
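Perceptus's developer-facing API hasn't been published, so the sketch below is only a hypothetical illustration of how such a layer might slot in above ARKit on iOS. The ObjectRecognitionLayer protocol, its methods, and the model names are assumptions, and DetectedObject is the hypothetical type from the earlier sketch; only ARSession, ARSessionDelegate, and ARWorldTrackingConfiguration are real ARKit API.

```swift
import ARKit
import UIKit

// Hypothetical interface for a recognition layer that sits above ARKit.
// The protocol name and its methods are assumptions for illustration only.
protocol ObjectRecognitionLayer {
    func loadModels(named modelIDs: [String])       // CAD models supplied to the platform
    func process(frame: ARFrame)                    // feed camera frames for recognition
    var rememberedObjects: [DetectedObject] { get } // persists when the camera looks away
}

final class LegoSuggestionViewController: UIViewController, ARSessionDelegate {
    let arSession = ARSession()
    var recognizer: ObjectRecognitionLayer?         // supplied by the hypothetical SDK

    override func viewDidLoad() {
        super.viewDidLoad()
        arSession.delegate = self
        recognizer?.loadModels(named: ["lego_brick_2x4", "lego_brick_1x2"])
        arSession.run(ARWorldTrackingConfiguration())
    }

    // ARKit delivers camera frames here; the recognition layer inspects each one.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        recognizer?.process(frame: frame)
    }

    // The app's job: do something useful with the remembered inventory,
    // e.g. suggest models that can be built from the detected bricks.
    func suggestBuilds() {
        guard let bricks = recognizer?.rememberedObjects else { return }
        print("Known bricks, visible or not: \(bricks.count)")
    }
}
```

The design point this sketch tries to capture is that the app only feeds frames in; the inventory of recognized objects lives in the layer and survives the camera looking elsewhere.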

Object scanning and identification are still very much manual processes. To start, app developers who license the Perceptus platform will need to provide computer-aided design models of the objects they want it to memorize. But those CAD models will be added to Singulos’ library, and future developers will be able to hunt through the digital stacks to more quickly find the objects they need. Soon, Quinton expects Perceptus to be able to identify a swath of common items—especially since there are already “large numbers of very accurate 3D models available” from video game makers. 
